Ex-OpenAI Employee Just Revealed it ALL!

TheAIGRID
8 Jun 2024 · 52:44

Summary

TLDR: The video discusses Leopold Aschenbrenner's insights on AGI's imminent arrival, predicting AGI by 2027, superintelligence soon after, and profound societal impacts. Aschenbrenner, a former OpenAI employee, posits that AI will surpass human cognitive abilities, automate AI research, and potentially trigger an uncontrollable intelligence explosion. The video also addresses the urgent need for robust AI safety and security measures to prevent misuse and catastrophic alignment failures, emphasizing the high stakes of the global race towards AGI.

Takeaways

  • 🧠 Leopold Aschenbrenner, a former OpenAI employee, predicts significant advancements in AI, suggesting that by the end of the decade we could achieve true superintelligence.
  • 📈 The script highlights the exponential growth in AI capabilities, with the transition from GPT-2 to GPT-4 representing a leap from preschooler to high-schooler levels of intelligence in just four years.
  • 💡 Aschenbrenner emphasizes the importance of 'situational awareness' in understanding the rapid development of AI and its potential impact on society and the economy.
  • 🔒 The document outlines the stages necessary for reaching AGI (Artificial General Intelligence) and predicts that by 2027, AI models could perform the work of an AI researcher, leading to recursive self-improvement.
  • 📊 The script discusses the importance of trend analysis in predicting AI capabilities, suggesting that straight-line trends in (log-scale) computational power and algorithmic efficiency point to AGI by 2027.
  • 🚀 The potential for AI to automate its own research is identified as a critical milestone that could trigger an 'intelligence explosion', rapidly advancing AI beyond human levels.
  • 🛡️ National security implications are underscored, with the possibility that AGI could be used to create unprecedented military advantages and the need for robust security measures to protect AI secrets.
  • 🌐 The script raises concerns about the potential misuse of AGI, including the risk of it falling into the wrong hands or being used to exert authoritarian control.
  • 🔍 The importance of aligning AGI with human values and ensuring its safety is highlighted, noting that current methods of supervision may not scale to superhuman AI systems.
  • 🏁 The final takeaway emphasizes the urgency and importance of the coming years in the race to AGI, suggesting that the next decade will be decisive for the future trajectory of AI and society.

Q & A

  • Who is Leopold Aschenbrenner and what is his significance in the context of AGI?

    -Leopold Aschenbrenner is a former OpenAI employee who was allegedly fired for leaking internal documents. His significance lies in his detailed insights and predictions about the path to AGI (Artificial General Intelligence), which he shared after his departure from OpenAI, providing a unique perspective on the future of AI development.

  • What does the term 'situational awareness' refer to in the context of Leopold Aschenbrenner's document?

    -In the context of Leopold Aschenbrenner's document, 'situational awareness' refers to the understanding and awareness of current and future developments in AI, particularly the progress towards AGI. It implies having a clear view of the trajectory of AI advancements and the implications they will have for society and the world.

  • What is the projected timeline for AGI according to Aschenbrenner's insights?

    -According to Aschenbrenner's insights, AGI could be achieved by 2027. He suggests that by this time, AI systems will have advanced to the point where they can outpace human intelligence and perform tasks equivalent to an AI researcher.

  • What are the implications of AGI for national security and military power?

    -The implications of AGI for national security and military power are significant. AGI could potentially provide a decisive and overwhelming military advantage, enabling rapid technological progress and military revolutions. It could lead to the development of advanced weaponry and strategies that would be difficult for non-AGI nations to counter.

  • What is the importance of algorithmic efficiencies in the progress towards AGI?

    -Algorithmic efficiencies are crucial in the progress towards AGI as they represent improvements in the algorithms themselves, which can lead to significant gains in AI capabilities. These efficiencies can compound over time, leading to exponential increases in the performance of AI systems.

  • How does Aschenbrenner describe the potential economic impact of AGI?

    -Aschenbrenner describes the potential economic impact of AGI as transformative, suggesting that it could lead to an unprecedented rate of economic growth. The automation of cognitive jobs and the acceleration of technological innovation could significantly compress the timeline for economic progress.

  • What are the security concerns raised by Aschenbrenner regarding AGI research?

    -Aschenbrenner raises concerns about the lack of security protocols in AI labs, which could make it easy for nation-states or other actors to steal AGI secrets. He warns that this could lead to a loss of lead in the AGI race and potentially put the world at risk if AGI technology falls into the wrong hands.

  • What is the 'intelligence explosion' mentioned in the script, and what are its potential consequences?

    -The 'intelligence explosion' refers to the self-accelerating loop of AI improvement where AGI systems become smarter and more capable at an ever-increasing rate. The potential consequences are vast, including the rapid advancement of technology, economic growth, and military capabilities, but also risks such as loss of control and potential misuse of power.

  • How does Aschenbrenner discuss the potential for AGI to be integrated into critical systems, including military systems?

    -Aschenbrenner discusses the potential for AGI to be integrated into critical systems as a double-edged sword. While it could lead to significant advancements and efficiencies, it also poses significant risks if not properly aligned with human values and interests. The integration of AGI into military systems, in particular, could have far-reaching implications for security and power dynamics.

  • What are the challenges associated with aligning AGI with human values and interests?

    -Aligning AGI with human values and interests is challenging because as AI systems become superhuman, it becomes increasingly difficult for humans to understand and supervise their behavior. This is known as the alignment problem, and it raises concerns about whether AGI systems can be trusted to act in ways that are beneficial to humans.

Outlines

00:00

🧠 AGI Predictions and Technological Advancements

Leopold Aschenbrenner, a former OpenAI employee, shares his insights on the path to AGI (Artificial General Intelligence) and its implications. He predicts that by 2025-2026, AI will outpace college graduates, and by the end of the decade we will witness superintelligence. The document outlines the exponential growth in computational power and the potential for AI to become smarter than humans, emphasizing the importance of situational awareness and the rapid evolution from GPT-2 to GPT-4 models.

05:00

📈 Projected Growth and Implications of AI Development

This section discusses the expected growth in AI capabilities, suggesting that by 2027-2028, we could have AI systems capable of automated AI research. The implications are stark, as this could lead to recursive self-improvement and superintelligence. The document highlights the importance of understanding the trends and magnitudes in AI development, and the potential for AI to surpass human intelligence in various domains.

10:01

🔒 Benchmarks and the Rapid Progress in AI

The script talks about the diminishing number of benchmarks capable of challenging AI models, as they continue to improve at an astonishing rate. It provides examples of how GPT models have evolved, with GPT-4 showing capabilities akin to a high school student and even hints at the first sparks of AGI. The rapid progress in AI is demonstrated through test scores and the ability of AI to solve complex problems, which is both fascinating and potentially concerning.

15:02

💡 The Magic of Deep Learning and Its Consistent Progress

Deep learning's effectiveness and consistent trend lines are highlighted, showing that despite skepticism, progress in AI has been remarkable. The script discusses the potential for AI to unlock significant latent capabilities through tools like Chain of Thought and Scaffolding, and how algorithmic efficiencies are a crucial yet underrated factor in AI's advancement.

20:05

🚀 The Acceleration Towards AGI and Unleashing National Security Forces

The document predicts a significant acceleration in AI capabilities, suggesting that by the end of the decade, we will see superintelligence and the unleashing of national security forces not seen in half a century. It emphasizes the importance of understanding the current state of AI and the potential for AGI to arise from the ongoing advancements in technology.

25:05

🌐 The Global Impact and Economic Growth Post-Superintelligence

This section speculates on the immense impact of superintelligence on a global scale, including the potential for rapid technological progress and military revolutions. It raises the question of how the global economy might grow in the wake of superintelligence, suggesting that the doubling time could decrease significantly, leading to an era of unprecedented growth and change.

30:06

🛡️ National Security and the Race for Superintelligence

The script addresses the critical issue of national security in the race for superintelligence, warning that current AI labs may not be taking security seriously enough. It suggests that the lead in the AGI race could be lost due to lack of security, potentially allowing authoritarian states to gain a significant advantage and threatening global safety.

35:07

πŸ›οΈ The Future of Governance and the Risks of Misaligned Superintelligence

The document discusses the future of governance in the context of superintelligence, highlighting the risks associated with misaligned AI systems. It emphasizes the technical challenges of controlling AI systems that are smarter than humans and the potential for these systems to act in ways that are not in our best interests, especially if they become integrated into critical systems like military infrastructure.

40:09

🕊️ The Importance of Freedom and Democracy in the Age of Superintelligence

The final paragraph stresses the importance of freedom and democracy as superintelligence becomes a reality. It warns of the potential for dictatorships to wield unprecedented power through AI-controlled systems, creating a permanent and unchallengeable rule. The document calls for the free world to prevail and for the importance of aligning superintelligence with human values to ensure a future that upholds democratic principles.

Keywords

💡AGI (Artificial General Intelligence)

AGI refers to a form of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond that of a human. In the video's theme, AGI is central to the discussion of technological advancement and its implications for the future. The script mentions predictions of AGI's arrival by 2027 and its potential to automate AI research, leading to an 'intelligence explosion'.

💡Leopold Aschenbrenner

Leopold Aschenbrenner is a figure who previously worked at OpenAI and was allegedly let go for leaking internal documents. His insights on the trajectory towards AGI are highlighted as particularly noteworthy in the script, emphasizing the significance of his contributions to the discourse on AI development.

💡Situational Awareness

Situational awareness, in the context of the video, pertains to the understanding of how the world is changing, particularly in relation to AI development. The script discusses a document titled 'Situational Awareness' which outlines the stages and predictions for the progression towards AGI, indicating the importance of being aware of the rapid changes in AI capabilities.

💡Compute Clusters

Compute clusters are groups of computers working together to perform large-scale computations. The script mentions the progression from '$10 billion compute clusters' to '$100 billion compute clusters' and even trillion-dollar clusters, illustrating the exponential growth in computational power needed to achieve AGI, which is a critical component in the race to develop advanced AI.

💡Algorithmic Efficiencies

Algorithmic efficiencies refer to improvements in the performance of algorithms, often through optimization or innovation. The script discusses the significant role that these efficiencies play in the advancement of AI, noting that they are 'dramatically underrated' and can lead to substantial gains in AI capabilities within a short timeframe.

💡Unhobbling Gains

Unhobbling gains describe the improvements in AI performance that result from removing constraints or limitations on AI models. The script uses this term to discuss how small algorithmic tweaks can unlock much greater capabilities in AI systems, leading to significant leaps in their effectiveness and intelligence.

💡Recursive Self-Improvement

Recursive self-improvement is a concept where an AI system can improve its own algorithms, leading to rapid and continuous enhancement of its capabilities. The script suggests that once AI systems can automate AI research, this process will lead to an 'intense feedback loop' of recursive self-improvement, potentially culminating in AGI.

💡Intelligence Explosion

An intelligence explosion refers to a hypothetical scenario where an AI's ability to improve itself rapidly outpaces human ability to understand or control it. The script warns of the potential dangers of such an event, suggesting that AGI could lead to 'superintelligence' with unprecedented power and implications.

💡AI Alignment

AI alignment is the problem of ensuring that AI systems act in a way that is beneficial to humans and aligned with human values. The script discusses the difficulty of aligning superintelligent AI, suggesting that as AI systems become more advanced, our ability to understand and control them diminishes, leading to potential misalignment.

💡Espionage

Espionage in the context of the video refers to the act of spying or using covert methods to obtain secret information. The script highlights concerns about the security of AI research and the potential for state actors to steal AGI secrets through espionage, which could have serious implications for the balance of power in the AI race.

💡Superintelligence

Superintelligence refers to an AI system that surpasses human intelligence in virtually all domains of cognitive endeavor. The script discusses the potential timeline for the development of superintelligence, suggesting that once AGI is achieved, the transition to superintelligence could be rapid and lead to a world where AI systems are 'unimaginably powerful' and have the ability to revolutionize every aspect of society.

Highlights

Leopold Aschenbrenner, previously of OpenAI, predicts a decade of rapid AGI development with profound global implications.

By 2025-2026, AI is expected to outpace college graduates in cognitive abilities, leading to superintelligence by the end of the decade.

National security measures not seen for half a century will be unleashed, indicating the seriousness of the AI advancements.

The document 'Situational Awareness' provides a detailed roadmap for the progression towards AGI, emphasizing the importance of understanding the trajectory.

GPT models have shown exponential growth, with GPT-4 in 2023 demonstrating high school level intelligence and capabilities.

The potential for automated AI research by 2027 could lead to a significant leap in AGI capabilities, as it would enable recursive self-improvement.

Algorithmic efficiencies and unhobbling of AI models are expected to drive substantial gains in AI capabilities.

Benchmarks for assessing AI are becoming obsolete as models like GPT-4 are already achieving high scores on traditional tests.

Deep learning's consistent progress suggests that the transition from GPT-4 to AGI could be rapid, with implications for economic and military power.

The cost of running AI models has decreased dramatically, making advanced AI more accessible and accelerating development.

The transition from AGI to superintelligence could be swift, with AI systems automating research and compressing decades of progress into years.

The importance of securing AGI development against espionage and unauthorized access to prevent the misuse of technology.

The potential for a superintelligence-led military revolution, with AI-driven systems providing unprecedented strategic advantages.

The challenge of aligning superintelligence with human values, as the complexity of AI behavior may become unfathomable to humans.

The potential risks of integrating AI into critical systems without proper safety mechanisms, including the possibility of catastrophic failures.

The document calls for a reevaluation of security protocols in AI labs to prevent leaks and ensure responsible development.

The possibility of a future where superintelligence could be used as a tool for authoritarian control, emphasizing the importance of democratic oversight.

Transcripts

00:00

So Leopold Aschenbrenner is someone who used to work at OpenAI until he was, quote unquote, fired for leaking internal documents. Now, I do want to state that this video is arguably one of the most important videos, because he details the decade ahead and how many of the companies around the world are going to get to AGI, and his insights are like no other. He made a tweet stating that virtually nobody is pricing in what's coming in AI, and he made an entire document about the stages we will need to go through in order to get to AGI and some of the things you're going to witness in the coming years. I think you should at least watch the first 10 minutes of this video, because it is remarkably insightful into some of the things he is predicting. There are seven different sections, and I've read this thing from top to bottom at least three times, and I'm going to give you the most insightful sections from the entire essay, because I believe this is a remarkable document that everyone needs to pay attention to. So, without wasting any more time, let's get into Situational Awareness: The Decade Ahead.

01:09

He has an introduction where he talks about how the talk of the town has shifted from $10 billion compute clusters to $100 billion compute clusters to even trillion-dollar clusters, and every six months another zero is added to the boardroom plans. The AGI race has begun. We are building machines that can think and reason, and by 2025 to 2026 these machines will outpace college graduates, and by the end of the decade they will be smarter than you or I, and we will have superintelligence in the true sense of the word. I'm going to say that again: by the end of the decade, we will have superintelligence in the truest sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, "the project" will be on.

play01:59

are some very fascinating predictions

play02:01

but just trust me once we get into some

play02:03

of the charts and some of the data that

play02:05

he's been analyzing I think it really

play02:08

does make sense and this is why this

play02:09

document is called situational awareness

play02:11

just read this part before we get into

play02:13

everything he says before long the world

play02:15

will wake up but right now there are

play02:17

perhaps a few hundred people most of

play02:19

them in San Francisco and the AI Labs

play02:22

that actually have situational awareness

play02:25

through whatever peculiar forces or fate

play02:27

I have found myself amongst them and

play02:29

this this is why this document is really

play02:31

important because information like this

play02:33

we're really lucky that people could

play02:35

leave a company like open ey and then

play02:37

publish a piece of information which

play02:39

gives us the details on how

play02:41

superintelligence is likely to arise and

play02:44

when that system is likely to arise so

02:46

So this is section one, From GPT-4 to AGI: Counting the OOMs (when you see "OOM" in the essay, that's what it stands for: order of magnitude). He clearly states his AGI prediction here: AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from preschooler to smart high schooler abilities in just 4 years, and if we trace the trend lines of compute, algorithmic efficiencies, and unhobbling gains, we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027.

03:23

Now this is where we get into our first very important chart, because it shows us exactly where things may go. He says: "I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn't require believing in sci-fi; it just requires believing in straight lines on a graph." What we can see here is a graph of the base scale-up of effective compute, counting GPT-2 all the way up to GPT-4, along with the effective compute we're going to continue to scale up.
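
To make "believing in straight lines on a graph" concrete, here is a minimal sketch of the OOM bookkeeping. The anchor values (roughly 5 OOMs of effective compute between GPT-2 in 2019 and GPT-4 in 2023) are illustrative assumptions, not numbers read off the essay's chart:

```python
# Sketch of "counting the OOMs": fit a straight line through effective
# compute in log10 space and extrapolate it forward. Anchor values below
# are illustrative assumptions, not figures from the essay's chart.
GPT2_YEAR, GPT4_YEAR = 2019, 2023
OOMS_GPT2_TO_GPT4 = 5.0  # assumed ~5 OOMs of effective compute over 4 years

ooms_per_year = OOMS_GPT2_TO_GPT4 / (GPT4_YEAR - GPT2_YEAR)  # ~1.25 OOMs/year

for year in range(2024, 2028):
    extra_ooms = ooms_per_year * (year - GPT4_YEAR)
    # A straight line in log space is exponential growth in raw compute.
    print(f"{year}: +{extra_ooms:.2f} OOMs vs GPT-4 (~{10 ** extra_ooms:,.0f}x)")
```

The point of counting OOMs rather than absolute FLOPs is exactly this: a trend that looks linear on a log scale is exponential in raw terms.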

04:00

Now, one thing that is fascinating here is that I think there is going to be an even steeper curve than this. The reason I say that is that during the period from 2022 to 2023 there was something I would like to call an awareness: this period marks the step from GPT-3 to GPT-4, and it put a real giant spectacle on the AI era. GPT-3 and GPT-4 weren't just research artifacts; well, GPT-3 was, but GPT-4 and ChatGPT 3.5 were actual products available to the public. Since then we've seen an explosion in how many people are intrigued by AI and how many companies are piling billions of dollars and resources into the technology and into compute clusters, just so they can capture the economic value that's going to be generated during this era. Which is why I believe it wouldn't be surprising if, during the period from 2024 to 2028, we have a lot more growth than we had in the earlier period. That means having an automated AI research engineer by 2027 to 2028 is not far off, because if we're just looking at the straight lines of effective compute, this is definitely where we could get to. And the implications are quite stark: if we can have an automated AI research engineer, it wouldn't take long to get to superintelligence after that, because if we can automate AI research, all bets are off; we're able to effectively recursively self-improve, even if initially without the crazy loop that makes superintelligence explode overnight.

05:48

Now, here's where he states one of the things that I think is really important to understand. I stated this in a video before this document was released, but it's good to see someone else raising one of the same concerns I originally had. He writes that the next generation of models has been in the oven, leading some to proclaim stagnation and that deep learning is hitting a wall, but by counting the orders of magnitude we get a peek at what we should actually expect. In a video around three weeks ago, I clearly stated that things are slowing down externally, but they are not slowing down internally at all; just because some of the top AI labs may not have presented their most recent research, that doesn't mean breakthroughs aren't being made every single month.

06:29

He states here that while the inference is simple, the implication is striking: another jump like that very well could take us to AGI, to models as smart as PhDs or experts that can work beside us as coworkers. Perhaps most importantly, if these AI systems could automate AI research itself, that would set off intense feedback loops: AI researchers make breakthroughs in AI research, those breakthroughs are applied to the AI systems, the systems become smarter, and the loop continues from there; basically recursive self-improvement, but on a slower scale. And he clearly states: even now, barely anyone is pricing this in. But situational awareness on AI isn't actually that hard once you step back and look at the trends; if you keep being surprised by AI capabilities, just start counting the orders of magnitude.

07:20

So here's where we talk about the last four years, GPT-2 to GPT-4. GPT-2 was essentially like a preschooler: while it could string together a few plausible sentences (and these are the GPT-2 examples people found very impressive at the time), it could barely count to five without getting tripped up. Then of course we had GPT-3 in 2020, which was about as smart as an elementary schooler, and this once again impressed people quite a lot. And then we get to GPT-4 in 2023, where we get a smart high schooler: it can write some pretty sophisticated code and iteratively debug it, it can write intelligently and sophisticatedly about complicated subjects, it can reason through difficult high school competition math, and it's beating the vast majority of high schoolers on whatever test we give it. And remember, there was the Sparks of AGI paper, which showed capabilities suggesting we weren't too far away from AGI and that this GPT-4-level system showed the first sparks of artificial general intelligence.

08:25

The thing is, he clearly states here, and I'm glad he's stating this because a lot of people don't realize it, that the limitation comes down to obvious ways in which models are still hobbled. Basically, he's talking about the way the models are used and the frameworks currently wrapped around them: the raw intelligence behind the model, the raw cognitive capability, is artificially constrained. If you factor in that these models are going to be unconstrained in the future, it's going to be very fascinating to see how that raw intelligence applies across different applications.

08:57

And one of the clear things that I think most people aren't realizing is that we're running out of benchmarks. As an anecdote, he writes: "my friends Dan and Colin made a benchmark called MMLU a few years ago, in 2020. They hoped to finally make a benchmark that would stand the test of time, equivalent to all the hardest exams we give high school and college students. Just three years later, models like GPT-4 and Gemini get around 90%." And of course GPT-4 mostly cracks all the standard high school and college aptitude tests. You can see here the test scores of AI systems on various capabilities relative to human performance, and in recent years there has been a stark level of increase. It's absolutely crazy how many different areas AI capabilities are improving in; it's really fascinating to see, and also potentially quite concerning.

09:50

Now, one of the things most people actually missed on the road from GPT-4 to AGI was a benchmark result that genuinely shocked me. There is a benchmark called MATH, a set of difficult problems from high school math competitions. When the benchmark was released in 2021, GPT-3 only got 5%, and the crazy thing is that the researchers at the time predicted: "to have more traction on mathematical problem solving we will likely need new algorithmic advancements from the broader research community", in other words, fundamental new breakthroughs to solve math. They predicted minimal progress over the coming years. But by mid-2022 we got to 50% accuracy, and now, with Gemini 1.5 Pro, we know this is at around 90% on MATH, which is absolutely incredible.

10:41

And here's something you can screenshot and share with your friends or colleagues, whatever community you might be in: the performance on common exams, in percentiles compared to human test takers. GPT-4 ranks above the 90th percentile on pretty much all of them except calculus and chemistry, which is a remarkable feat given we went from GPT-3 to GPT-4 in such a short amount of time. This is a true jump in capabilities that many people simply wouldn't have expected.

11:12

Now here's where we start to get to some of the predictions we can really make based on the nature of deep learning. Essentially, the magic of deep learning is that it just works, and the trend lines have been astonishingly consistent despite the naysayers at every turn. We can see here screenshots of the scaling-compute comparison from OpenAI's Sora work, and at each level we see an increase in quality and consistency: base compute results in a pretty terrible image/video, 4x compute results in something fairly coherent and consistent, but 32x compute produces something remarkable in terms of the quality, consistency, and overall level of the video we get. This shows us that these trend lines are very, very consistent.

11:59

He says that if we can reliably count the orders of magnitude we're going to be training these models at, we can extrapolate the capability improvements, and that's how some people actually saw the GPT-4 level of capabilities coming. One of the things he talks about is tools like chain of thought and scaffolding, through which we can unlock significant latent capabilities. Basically, when we have GPT-4, or whatever the base cognitive capability of an architecture is, we can unlock latent capabilities by adding different steps in front of that system. For example, when you use GPT-4 with chain-of-thought reasoning, you significantly improve its ability to answer certain questions in different scenarios. It's things like that which let you unlock more knowledge from the system by interacting with it in different ways, which means the raw knowledge behind the system is a lot bigger than people think. This is what he calls unhobbling gains.
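
As a minimal sketch of what "adding different steps in front of the system" means in practice, here is the shape of a chain-of-thought wrapper. `ask_model` is a hypothetical stand-in for whatever LLM completion call you use, not a real API:

```python
# Sketch of "unhobbling" via chain-of-thought prompting. `ask_model` is a
# hypothetical placeholder for an LLM completion call, not a real API.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM call here")

def answer_direct(question: str) -> str:
    # Hobbled: the model must blurt out its first guess.
    return ask_model(f"Q: {question}\nAnswer with only the final result.\nA:")

def answer_with_cot(question: str) -> str:
    # Unhobbled: the same model, told to work step by step first,
    # which reliably helps on math and reasoning problems.
    return ask_model(
        f"Q: {question}\n"
        "Work through the problem step by step, then give the final "
        "answer on a new line starting with 'Answer:'."
    )
```

The base model is identical in both functions; only the steps placed in front of it change, which is the whole idea of an unhobbling gain.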

12:55

Now, one of the things that's really important, and something that doesn't get enough attention, is what's going to make up a lot of the gains you won't see: the algorithmic efficiencies. Whilst massive investments into compute get all the attention, algorithmic progress is a similarly important driver of progress and is dramatically underrated. To see just how big a deal algorithmic progress can be, consider this illustration: the drop in the price to attain 50% accuracy on the MATH benchmark over just two years (for comparison, a computer science PhD student who didn't particularly like math scored 40%, so this is already quite good). The inference efficiency improved by nearly three orders of magnitude, 1,000x, in less than two years. What we have is something incredibly more efficient for the same result in just two years, which is absolutely incredible. These algorithmic efficiencies are going to drive a lot more gains than you think, and as someone who watches arXiv, which is where a lot of these research papers get published, trust me: there are probably 50 to 80 different research papers published every single day, and a few of those each deliver a 10 to 20% gain, or a 30% gain. If you consider that all of these algorithmic efficiencies compound on each other, we're really going to see more cases like this.
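
To see why compounding matters, here is a toy calculation; the "one 10% win per week" rate is an assumption for illustration, not a figure from the video:

```python
import math

# Toy model of compounding algorithmic wins. The "one 10% win per week"
# rate is an illustrative assumption, not a figure from the video.
weekly_gain = 1.10
total = weekly_gain ** 52
print(f"52 weekly 10% wins compound to ~{total:.0f}x "
      f"(~{math.log10(total):.1f} OOMs) in a year")  # ~142x, ~2.2 OOMs
```

For reference, the essay's later figure of a decade of algorithmic progress being worth around 5 OOMs works out to roughly 0.5 OOMs per year, so real wins are evidently rarer or smaller than in this toy setup; the point is only that modest multiplicative gains stack into orders of magnitude surprisingly fast.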

14:17

Here's where he talks about API cost, basically looking at how efficient it becomes to run these models. GPT-4 on release cost about the same as GPT-3 did when it was released, but in the year since GPT-4 came out, prices for GPT-4-level models have fallen 6x/4x for input/output with the release of GPT-4o. And a GPT-3.75-level model is basically Gemini 1.5 Flash, which is 85 times cheaper than what we previously used to have.

14:48

We can see on this graph that if we want to calculate how much progress we're going to make, there are two main drivers. First, the physical compute scale-up: things like the data centers and the hardware we throw at the problem. Second, the algorithmic progress: the efficiencies where people rewrite these algorithms in clever ways and drive gains we previously didn't know how to achieve. And that's why, in a future where we do get an automated AI researcher working on that second driver, this gap is going to widen even more.
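
Since the two drivers multiply in raw terms, they simply add in log space, which makes the bookkeeping easy. A small sketch, with per-year rates that are illustrative assumptions rather than the essay's exact estimates:

```python
# Effective compute = physical compute x algorithmic efficiency, so the
# two drivers add in log10 space. Per-year rates are illustrative
# assumptions, not the essay's exact estimates.
physical_ooms_per_year = 0.75    # assumed raw hardware/cluster scale-up
algorithmic_ooms_per_year = 0.5  # assumed efficiency gains (the underrated part)

years = 4  # e.g. 2023 -> 2027
total_ooms = years * (physical_ooms_per_year + algorithmic_ooms_per_year)
print(f"{years} years -> +{total_ooms:.1f} OOMs of effective compute "
      f"(~{10 ** total_ooms:,.0f}x)")  # 5.0 OOMs ~ 100,000x
```

With these assumed rates, four years lands squarely inside the 3 to 6 OOM range the video cites a little later.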

15:20

Now, this is where we talk about unhobbling. It's something we just touched on, but the reason it matters is that it's where you can get gains from a model in ways you couldn't previously see. Imagine if, whenever someone asked you a math problem, you had to instantly answer with the first thing that came to mind. It seems pretty obvious that you would have a very hard time, except on the simplest problems. But until recently, that's how we had LLMs solve math problems. When we humans do math problems, by contrast, we work through the problem step by step, and we're able to solve much more difficult problems that way. That's basically chain of thought, and that's what we now do for LLMs: despite their excellent raw capabilities, they were much worse at math than they could be because they were hobbled in an obvious way, and it was a small algorithmic tweak that unlocked much greater capabilities. Essentially, what he's stating is that as these even better models get even more unhobbled, we're going to see even more compounded gains overall.

play16:16

one of the craziest ones that we

play16:17

recently have is of course GPT 4 can

play16:19

only solve the software engineering

play16:21

bench 2% correctly while with Devon's

play16:24

agent scaffolding it jumps to

play16:27

142% which is pretty pretty incredible

play16:30

and this is something that is very very

play16:33

small in terms of its infancies and it

play16:35

says tools imagine if humans weren't

play16:37

allowed to use calculators or computers

play16:39

we're literally only at the beginning

play16:40

here chpt can only now use a web browser

play16:43

run some code and so on and of course

play16:45

this is where we talk about the context

play16:47

length which is you know it's gone from

play16:48

a 2K context length to 32 to literally a

play16:51

1 million context length and of course

play16:54

there's posttraining which is

play16:55

substantially improving the models after

play16:58

you've trained the model which is making

play17:00

huge gains we went from 50% to 72% on

play17:03

math and 40% to 50% on the GP QA and

17:07

And here we can see once again another stack of growth: from the raw model to the chatbot to the agent. These are things most people just aren't factoring in when they look at the future of AI growth, and this is why he says the improvements will be step changes on top of GPT-6 and reinforcement learning from human feedback: by 2027, rather than a chatbot, you're going to have something that looks more like an agent and more like a coworker.

17:37

Now, one of the craziest things I saw here is what you get when you take in all of the information just stated; he basically lays out what the end of 2027 looks like, and it is absolutely insane. We can see the gains made from GPT-2 to GPT-4 via physical compute, algorithmic efficiencies, plus major unhobbling gains from base model to chatbot. In the subsequent four years, we should see another 3 to 6 orders of magnitude of base effective compute scale-up (physical compute plus algorithmic efficiencies). With all of this combined, here is what that should look like: GPT-4 training took about 3 months; in 2027, a leading AI lab will be able to train a GPT-4-level model in a minute. That is an incredible prediction, and I'm wondering if it's going to come true. But then again, think about it: in 3 years, with billions of dollars and that much more compute floating around the industry, I wouldn't be surprised if some of the things we think are sci-fi right now completely aren't.
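
The "GPT-4 in a minute" claim is easy to sanity-check: going from roughly 3 months to 1 minute is about a 130,000x speed-up, or roughly 5 OOMs, which sits inside the 3 to 6 OOMs of effective compute projected above:

```python
import math

# Sanity check: 3 months of training compressed into 1 minute.
three_months_in_minutes = 90 * 24 * 60          # ~129,600 minutes
speedup = three_months_in_minutes / 1           # train the same model in 1 minute
print(f"~{speedup:,.0f}x, i.e. {math.log10(speedup):.1f} OOMs of effective compute")
```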

18:37

So here's where we can see everything visualized: we have the base model, then the chatbot framework, then the agentic framework, and once we add the further orders of magnitude, we can see the intelligence becoming an automated AI research engineer by 2027. Now, when you actually look at all the information from every single data point, it doesn't seem that crazy; if you take every single thing into account, this doesn't seem too far away, especially with the kinds of jumps we've seen before. And maybe, just maybe, with the GPT-5 release and subsequent AI models, we're going to start to see that 2026 and 2027 are going to be incredible time periods. He says we are on course for AGI by 2027, and that these AI systems will be able to automate basically all cognitive jobs: think any job that can be done remotely. That is a crazy statement, and I think it's something you need to bear in mind: AGI by 2027 is not out of the picture, and it's something that definitely could happen.

19:45

One of the most interesting things, and I think this is really important, is the reason this period matters so much: it is the decisive period, the period where the growth occurs and we really get to see what is possible. He says right here: in essence, we're in the middle of a huge scale-up reaping one-time gains this decade, and progress through the orders of magnitude will be multiples slower thereafter. If this scale-up doesn't get us to AGI in the next 5 to 10 years, it might be a long way out. So the reason this is going to be so interesting is that it's this decade or bust.

20:27

You can see right here that the effective scale-up of compute is going to become harder and harder the larger it gets. Think about how hard it is to invest billions and billions of dollars more to scale these systems up even further: scaling a model from $10 million to $100 million is one thing, but the step from $100 million to $10 billion is really huge. It takes a lot of investment; you're going to need multiple data centers, you're going to have to make them really huge, you're going to have to think about all the cooling, and there are serious power requirements. And then getting to $100 billion, $500 billion, or even trillion-dollar clusters is even more extreme.

21:10

So basically, he's stating that once we get to the $100 billion level and above, if we aren't at AGI at that point, then realistically we're going to have to wait for some kind of algorithmic breakthrough or an entirely new architecture, because after the gains made by throwing that much more compute at the problem, it becomes very hard to make further gains from compute alone.

21:37

He also says that spending a million dollars on a model used to be outrageous, but by the end of the decade we will likely have $100 billion or $1 trillion clusters, and going much higher than that is going to be a lot harder. That's basically the feasible limit, both in terms of what big businesses can actually afford and even just as a fraction of GDP. He also states that the large gains we got from moving from CPUs to GPUs will likely be gone by the end of the decade, because we're going to have AI-specific chips, and without much further Moore's Law beyond that, there aren't going to be many more gains possible there.

22:13

The reason this matters, for those of you trying to navigate this whole thing and figure out where AI capabilities are going to stop and where the next growth will come from, is basically this: right now we're scaling up our systems, and once we reach the top end of $100 billion to $1 trillion clusters, if we don't have superintelligence or AGI by that limit, then we'll know that maybe we're using the wrong architecture, and things will have to change significantly. So it's either going to be a long, slow slog, or we're going to get there relatively soon, and by the looks of things, we're going to get there relatively soon.

22:48

Now here's where we talk about AGI to superintelligence: the intelligence explosion. This is where he argues that AI progress will not stop at human level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress, which adds five orders of magnitude, into one year. We would rapidly go from human-level to vastly superhuman AI systems, and the power, and the peril, of superintelligence would be dramatic.
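
The compression arithmetic behind that claim is straightforward. A minimal sketch, assuming the baseline implied just above (about 0.5 OOMs of algorithmic progress per human-driven year) and an assumed round 10x effective speed-up from automation:

```python
# "Compressed decade": 5 OOMs of algorithmic progress (a human decade at
# ~0.5 OOMs/year) delivered by automated researchers. The 10x effective
# speed-up is an assumed round number for illustration.
human_ooms_per_year = 0.5
decade_ooms = 10 * human_ooms_per_year       # 5 OOMs per human decade
automation_multiplier = 10                   # assumed effective speed-up
years_needed = decade_ooms / (human_ooms_per_year * automation_multiplier)
print(f"{decade_ooms:.0f} OOMs in ~{years_needed:.0f} year(s)")
```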

play23:19

basically I think the most important

play23:21

graph okay if there's one that you want

play23:22

to screenshot and keep on your phone I

play23:24

think it's this one okay the reason that

play23:26

it is is because once you have the GPT 4

play23:30

gpt3 gpt2 timelines mapped out we can

play23:34

clearly see that this intersection here

play23:36

is at 2023 but of course as the trends

play23:39

continue we can see that once we do get

play23:41

to this period right here this is where

play23:44

things start to get interesting because

play23:46

this is of course the period of

play23:47

automated AI research and that's why

play23:50

once this does happen and this is not

play23:52

something that's like a fairy tale this

play23:53

is something that Sam mman has said

play23:56

that's his entire goal that's what

play23:57

opening eye are trying to build they're

play23:59

not really trying to build super

play24:00

intelligence but they Define AGI as a

play24:03

system that can do automated AI research

play24:05

and once that does occur and I don't

play24:07

think it's going to take that long

play24:09

that's when we're going to get that

play24:10

recursive self-improvement Loop where

play24:13

super intelligence is not going to take

play24:15

that long after because if you can

play24:17

deploy 5,000 agents okay that are

play24:20

essentially all super intelligent not

play24:22

super intelligent but at the level of a

play24:25

standard AI researcher and we can deploy

play24:27

them on certain problems and keep them

play24:29

running 24/7 that is going to just

play24:33

compress years of AI Research into a

play24:36

very short time frame which is why you

play24:38

can see that the graph during this

play24:39

purple period here it starts to go up

play24:42

rapidly and that's why the next decade

24:45

Once that actually happens, once we get to the breakthrough level where we've automated AI research, all bets are off, because we'll know superintelligence is just around the corner. And that's where we get the intelligence explosion: every time an AI researcher manages to make a breakthrough, the breakthrough is then applied to that AI researcher, and the progress continues again, because now the AI researcher is just that much more efficient, or even smarter.

25:15

And here's one of the craziest implications of this entire thing: we don't need to automate everything, just AI research. I'll say that again: we don't need to automate everything, just AI research. A common objection to transformative impacts of AGI is that it will be hard for AI to do everything. Look at robotics, for instance: the doubters say that will be a gnarly problem even if AI is cognitively at the level of PhDs. Or take automating biology research and design, which might require lots of physical lab work and human experiments. But we don't actually need robotics, we don't need many things at all, for AI to automate AI research. The jobs of AI researchers and engineers at leading labs can be done fully virtually and don't run into real-world bottlenecks the same way robotics does; though this will still be limited by compute, which is addressed later, meaning the literal hardware constraints you hit when trying to scale these systems.

26:08

Well, I say it's not that hard; theoretically, the loop should be straightforward: read the ML literature, come up with new questions and ideas, implement experiments to test those ideas, interpret the results, and repeat. Once we get to that level, that's where we have this insane feedback loop. By 2027 we should expect GPU fleets in the tens of millions, with training clusters alone approaching 3 orders of magnitude larger, already putting us at around 10 million A100-equivalents. That could run millions of copies of our automated AI researchers, perhaps 100 million human-researcher equivalents, running day and night. That is absolutely incredible, and of course some of the GPUs will be used for training new models, but just think about that: imagine 100 million human-researcher equivalents running 24/7.
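
The research loop he describes (read literature, propose ideas, run experiments, interpret, repeat) is worth seeing as structure. Here is a sketch; every helper is a hypothetical stub standing in for LLM and tooling calls, and the real constraint it encodes is the compute budget spent on experiments:

```python
# Sketch of the fully-virtual research loop. All helpers are hypothetical
# stubs standing in for LLM/tooling calls; the loop structure and the
# compute budget are the point.
def read_ml_literature() -> list[str]:
    return ["digested paper summaries would go here"]

def propose_idea(papers: list[str], findings: list[str]) -> str:
    return "a new hypothesis to test"

def run_experiment(idea: str) -> tuple[str, int]:
    return "experimental result", 100  # (outcome, compute spent): compute is the bottleneck

def automated_research_loop(compute_budget: int) -> list[str]:
    findings: list[str] = []
    while compute_budget > 0:
        papers = read_ml_literature()          # read the literature
        idea = propose_idea(papers, findings)  # come up with new questions/ideas
        result, cost = run_experiment(idea)    # implement and test
        findings.append(f"interpretation of {result}")  # interpret the results
        compute_budget -= cost                 # and repeat, until compute runs out
    return findings

print(len(automated_research_loop(compute_budget=1_000)))  # 10 iterations
```

Nothing in the loop requires a body or a wet lab, which is exactly why AI research is the one domain that needs automating first.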

27:01

What kind of breakthroughs are going to be made at that stage? It's very hard to conceptualize, but it's important to take into account what is truly coming, because, like he said, nobody's really pricing this in. And the crazy thing is that they're not going to be working at human speed: not long after we begin being able to automate AI research, each of them will be working at 100 times human speed. So think about it: you're going to have something like 100 million more AI researchers, each working at 100 times your speed, able to do a year's worth of work in a few days. That is going to be absolutely insane, and you have to remember: the current rate of breakthroughs we're getting with just humans is already incredible, so once we're able to automate it, the intelligence explosion is literally going to be unfathomable.
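
The "a year's worth of work in a few days" line is just serial-speed arithmetic, and it is worth separating from the headcount claim: the 100x speed compresses serial time, while the 100 million copies multiply breadth, not serial depth:

```python
# Serial-speed arithmetic: one agent at 100x human speed finishes a
# human-year of serial work in 365 / 100 = 3.65 days. Parallel copies
# add breadth (more projects at once), not serial depth.
human_year_days = 365
speed_multiplier = 100
print(f"~{human_year_days / speed_multiplier:.2f} days per human-year of work")
```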

Now, this is one of the bottlenecks that most people don't talk about: limited compute. Whilst right now you're probably thinking, wow, this is really incredible, we could really be on the cusp of something amazing here, compute is still going to be limited. Then there's also an idea which I think most people haven't considered, and this includes myself: ideas could get harder to find, and there are diminishing returns, so the intelligence explosion could quickly fizzle. Related to that objection: even if the automated AI researchers lead to an initial burst of progress, whether rapid progress can be sustained depends on the shape of the diminishing-returns curve for algorithmic progress. In his words, his best read of the empirical evidence is that the exponents shake out in favor of explosive, accelerating progress. In any case, the sheer size of the one-time boost from hundreds to hundreds of millions of AI researchers probably overcomes diminishing returns for at least a good number of orders of magnitude of algorithmic progress, even though it can't be indefinitely self-sustaining. Basically, there are a few things that could slow down AI progress, but that's something far, far into the future.
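The "fizzle versus explode" question here really comes down to an exponent. A minimal toy model (my own illustration, not from the document): let A be the level of algorithmic progress, suppose research effort itself scales with A because the researchers are automated, and let "ideas get harder to find" show up as the exponent p in dA/dt = A^p:

```python
# Toy model of the diminishing-returns objection. With dA/dt = A**p:
#   p < 1  -> diminishing returns dominate and growth fizzles (polynomial)
#   p = 1  -> steady exponential growth
#   p > 1  -> feedback dominates and A diverges in finite time (explosion)

def simulate(p: float, t_end: float = 10.0, dt: float = 0.01) -> float:
    A, t = 1.0, 0.0
    while t < t_end:
        A += dt * A**p       # simple Euler step
        t += dt
        if A > 1e12:         # treat as "exploded"
            return float("inf")
    return A

for p in (0.5, 1.0, 1.5):
    print(f"p = {p}: A(10) = {simulate(p):.3g}")
```

Whether the real-world exponent sits above or below that threshold is exactly the empirical question he's gesturing at.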

So here's where he talks about the takeoff to AGI. He says 2027 is AGI, and then we get to superintelligence, and at a very basic level it's probably going to look like this: in 2026 to 2027 we get a proto-automated engineer, which has blind spots in other areas but is able to speed up work by 1.5 to 2 times, and already progress begins accelerating. Then in 2027 to 2028 we have proto-automated researchers that can automate more than 90% of the work, with some remaining human bottlenecks and hiccups in coordinating a giant organization of automated researchers still to be worked out, but this already speeds up progress by 3 times. And then, with AGI and these kinds of researchers, we get 10 times the pace of progress in 2029, and that's how we get to superintelligence. This is thinking of it as the slow route to superintelligence, but the point is, ladies and gentlemen, that is still very, very fast.
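Compounding those stepwise multipliers shows why even the "slow" route is fast. A small sketch, using the speedups quoted above as assumptions (the exact year-by-year assignment is my reading of the timeline, not a precise schedule from the document):

```python
# Each calendar year delivers `speedup` years of "normal-pace" progress.
# The multipliers are the ones quoted in the takeoff description above.
timeline = {2026: 1.5, 2027: 2.0, 2028: 3.0, 2029: 10.0}

cumulative = 0.0
for year, speedup in timeline.items():
    cumulative += speedup
    print(f"{year}: {speedup:>4}x pace -> {cumulative:4.1f} years of progress so far")
# Roughly 16 "normal" years of research compressed into 4 calendar years.
```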

He talks about how, by the end of this decade, the AI systems we'll have will be unimaginably powerful, meaning that even with what you can picture right now, it's going to be pretty hard to conceptualize how capable they'll be. He gives a really interesting description of how this could actually happen, and it's pretty incredible to think about. As he puts it, they'll be able to run a civilization of billions of themselves, thinking orders of magnitude faster than humans. They'll be able to quickly master any domain, write trillions of lines of code, read every research paper in every scientific field ever written and write new ones before you've gotten past the abstract of one, learn from the parallel experience of every one of their copies, gain billions of human-equivalent years of experience with some new innovation in a matter of weeks, work 100% of the time with peak energy and focus, and never be slowed down by that one teammate who is lagging, and so on. And we've already seen early examples of this kind of thing. Take the famous move 37 from AlphaGo: the system played a move in a game of Go that looked so strange that people watching assumed it had just blundered, and the calculated odds that a human would ever have played that move were vanishingly small. The system had found a move that no one would have thought of, and it stunned people; Lee Sedol, the human player, couldn't figure out what was going on, and he eventually lost that game. He's basically stating that superintelligence is going to be like this across many domains. It will be able to find exploits in human code too subtle for humans to notice, and generate code too complicated for any human to understand, even if the model spent decades trying to explain it. We're going to be like high schoolers stuck on Newtonian physics while it's off exploring quantum mechanics. Imagine all of this applied to every domain of science, technology, and the economy. Of course, the error bars here are still extremely large, but just imagine how consequential this would all be.

Of course, one of the big things is solving robotics: superintelligence is not going to stay purely cognitive for long. Once we get systems at AGI level, factories are going to shift from human-run, to AI-directed using human physical labor, to soon being fully run by swarms of human-level robots. And think about it like this: the 2030s to 2040s are going to be absolutely insane, because the research and development that human researchers would have done over the next century gets compressed into years. Think about how, in the 20th century, we went from flight being a mirage, with people saying we would never fly, to airplanes, to a man on the moon, over spans of 50, 40, 30, 20 years. But in the 2030s this kind of leap is going to happen in just a few years; in a literal handful of years we're going to have breakthroughs across many different sectors, technologies, and industries. And this is where you can see the doubling time of the global economy: since the early 1900s it has been roughly every 15 years, but what happens after superintelligence? Is it going to be every 3 years? Every five? Every year? Every 6 months? How crazy is the growth going to be? Because, as we've seen, these exponential decreases in doubling time are very, very hard to predict.
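For intuition on what those doubling times mean in ordinary growth-rate terms, here's the standard conversion (the 15-year baseline is the video's figure; the rest are the hypothetical doubling times it floats):

```python
import math

def annual_growth_for_doubling(years: float) -> float:
    """Growth rate g such that (1 + g)**years == 2."""
    return 2 ** (1 / years) - 1

for years in (15, 5, 3, 1, 0.5):
    g = annual_growth_for_doubling(years)
    print(f"doubling every {years:>4} years -> {g:7.1%} annual growth")
# 15-year doubling is ~4.7%/yr (roughly the modern global economy);
# a 6-month doubling would mean ~300%/yr.
```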

Here are two more things that I think are really, really important, and I know this video is really long, but trust me: this is probably the last industrial revolution that will ever happen, and it's something we're able to witness through these documents. The first is a decisive and overwhelming military advantage. Early cognitive superintelligence might be enough here: perhaps some superhuman hacking scheme could deactivate adversary militaries. In any case, military power and technological progress have been tightly linked historically, and extraordinarily rapid technological progress will bring military revolutions: drone swarms, and all the kinds of research and development you could do to create weapons you couldn't even think of today. Basically, think about it like this: with superintelligence, it's like a 21st-century military, with fighter jets, tanks, and air strikes, fighting a 19th-century brigade of horses and bayonets. That's a war the brigade simply can't win; with the technology we have, you'd only need a single F-22 to annihilate the entire 19th-century force. The same is going to happen with superintelligence. These research and development efforts could create a deeply unstable situation, where if we don't get to superintelligence first, a nation state willing to do whatever it wants could have technologies so far advanced that it would hold a true military advantage over everyone. And this is why I think things are going to change. Whoever controls superintelligence will possibly have enough power to seize control from pre-superintelligence forces. Even without the robots, a small civilization of superintelligences would be able to hack any undefended military, election, or television system, cunningly persuade generals and electorates, economically outcompete nation states, design new synthetic bioweapons and then pay a human in Bitcoin to synthesize them, and so on. Basically, what we're going to see here is a shift of power. I don't know how the government is going to deal with this, whether they'll just seize OpenAI's computers or whatever, but whoever gets to superintelligence first, I truly believe all bets are off, because if something has cognitive abilities 10 to 100 times beyond yours, trying to outsmart it is just not going to happen; you've effectively lost at that point, which means whoever controls it could, in principle, even overthrow the US government. It's a pretty striking statement, but I do think it's true. And this is where you can see that the moment we get an automated AI researcher, all of these other areas start to take off in remarkably different ways. It's truly incredible.

Now here's where we get to an interesting point: security for AGI. This is really important, because after he released this document, OpenAI actually updated their website with something of a rebuttal to this exact part. And like I said before, this is why I truly think that, starting next year, there aren't going to be any AI leaks after 2025: the nature of AI work is going to change, because labs will realize how serious AI is and that it will be treated like, I guess you could say, a US national secret, in the sense that we just don't get secrets out of the Pentagon unless there's a whistleblower, who eventually gets arrested anyway. Essentially, he says the nation's leading AI labs treat security as an afterthought; currently, they're basically handing the key secrets for AGI to the CCP on a silver platter. Securing the AGI secrets and weights against the state-actor threat will be an immense effort, and we're not on track. He's basically stating that if we're actually going to build superintelligence here, something that will really change the world, we need to get serious about security right now. There are so many loopholes in today's top AI labs that people could literally be infiltrating these companies with no way to even know what's going on, because there are no real security protocols. The problem is that it's not being treated as seriously as it should be; it's not like the CIA, or the Pentagon, or whatever secret military organization exists with strict, well-defined security practices. He's basically stating that right now you don't even need to mount a dramatic espionage operation to steal these secrets: just go to any San Francisco party, or look through office windows. It's not taken seriously yet because people don't realize what's at stake, but, like I said before, AI labs are currently developing algorithmic secrets, the key technical breakthroughs, the blueprints, so to speak, for AGI, and in particular for the next paradigm, the next level of systems. What we need to do is protect these algorithmic secrets if we're supposed to maintain this lead, and of course secure the model weights, which are going to matter even more as we get these larger clusters.

play38:51

in the next 12 to 24 months we will leak

play38:53

key AGI breakthroughs to the CCP it will

play38:56

be to the National security

play38:58

establishment the greatest regret before

play39:00

the decade is out this is of course the

play39:02

preservation of the three World against

play39:04

the authoritarian States and it's on the

play39:07

line a healthy need will be the

play39:09

necessary buffer that gives us the

play39:10

margin to get AI safety right to the

play39:13

United States has an advantage in the

play39:15

AGI race but we're going to give up this

play39:17

lead if we don't get serious about

play39:18

security very soon and if we don't get

play39:21

this right we need to ensure that we do

play39:23

now to ensure that AGI goes very well

play39:26

and I do agree with that because if

play39:27

we're not going to get this right other

play39:29

countries could try and Rush forward

play39:31

ahead with the technology so that they

play39:33

can you know Advance their research and

play39:35

design effort in the military so that

play39:36

can gain a military advantage and what

play39:39

happens if there's some kind of security

play39:41

error where those systems go off the

play39:43

rails I mean it's truly going to be

play39:45

incredible and he says too many Spar

play39:47

people underestimate Espionage the

play39:49

capabilities of states and their

play39:51

intelligence agencies are extremely

play39:53

formidable even in a normal non allout

play39:56

AGI race times and from Little that we

play39:58

know publicly nation states or even less

play40:01

Advanced actors have been able to zero

play40:03

click hack any desired iPhone and a Mac

play40:06

with just a phone number infiltrate an

play40:08

air gapped aut topic weapons program

play40:11

modify the Google source code find

play40:14

dozens of zerod day exploits a year that

play40:18

take on average 7 years to detect

play40:20

Spearfish major tech companies install

play40:23

key loggers on an employee device insert

play40:25

trap doors in encryption schemes still

play40:27

information I mean he's basically

play40:29

stating that look if little less

play40:32

Advanced actors can do this okay and

play40:34

this is just the stuff that we know

play40:36

publicly imagine what you know people

play40:38

are probably planning for the race for

play40:40

AGI like imagine what is really going on

play40:42

behind closed doors in order to get the

play40:44

system because guys AGI is basically a

play40:46

race to it first and whoever gets the

play40:48

super intelligence first truly does win

play40:50

like I want to make that clear and he's

play40:52

basically stating here that look we need

play40:53

to protect the model weights especially

play40:56

as we get close to AG GI but this is

play40:58

going to take years of preparation and

play40:59

practice to get right and of course we

play41:02

need to protect the algorithmic secrets

play41:04

starting yesterday's basically explains

play41:06

here that the model Waits are just a

play41:07

large files of numbers on a server and

play41:10

these can be easily stolen all it takes

play41:12

is an adversary to match your trillions

play41:14

of dollars and your smartest minds of

play41:16

Decades of work just to steal this file

play41:18

and imagine if the Nazis has gotten an

play41:20

exact duplicate of every atomic bomb

play41:22

made in Los Alamos Los Alamos was that

play41:25

secret area where people were developing

play41:28

the atomic bomb and he's basically

play41:29

saying that look imagine the stuff from

play41:31

the atomic bomb had gotten to the Nazis

play41:34

imagine what the future would look like

play41:35

that is not a future we do want to

play41:38

create for ourselves so we need to make

play41:40

sure we keep the model weight secure or

play41:42

otherwise we're building AGI for any

play41:44

other nation state even possibly North

play41:47

Korea he's basically stating that look

play41:49

this is a serious problem because all

play41:50

they need to do is automate AI research

play41:53

build super intelligence and any lead

play41:55

that the US had would vanish the power

play41:58

dynamics would shift immediately and

play42:01

they would launch their own intelligence

play42:03

explosion what would a future look like

play42:05

if the US is no longer in the lead and

play42:07

then of course this is a problem because

play42:09

if we find out that they also have the

play42:11

same secrets that we do this is going to

play42:13

put us existential race which means that

play42:15

the margin for ensuring the

play42:17

superintelligence is safe is going to

play42:19

completely disappear and we know that

play42:20

other countries are going to immediately

play42:22

try and race through this Gap where

play42:24

they're going to skip all the safety

play42:25

precautions that any responsible us AGI

play42:28

effort would hope to take which is why I

play42:30

said once people start to think wait a

play42:32

minute this is truly the stake of

play42:34

humanity right here we need to make sure

play42:36

that okay we secure everything down and

play42:38

I'm sure that we're not going to get any

play42:40

more leaks so now this is where open AI

So now, this is where OpenAI, literally yesterday, published "Securing Research Infrastructure for Advanced AI," outlining the architecture that supports the secure training of frontier models. They say they're sharing some high-level details on the security architecture of their research supercomputers; OpenAI operates some of the largest AI training supercomputers, enabling them to deliver models that are industry-leading in both capabilities and safety while advancing the frontiers of AI, and they state that they prioritize security throughout. They detail some of the ways they protect the model weights, stating that protecting the model weights from exfiltration out of the research environment requires a defense-in-depth approach encompassing multiple layers of security, with bespoke controls tailored to safeguard their research assets against unauthorized access and theft while ensuring they remain accessible for research and development purposes.
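To give a feel for what "multiple layers" means in practice, here's a minimal sketch of just two such layers for a weights file: integrity checking and encryption at rest. This is purely my illustration of the general pattern, not OpenAI's actual controls, and the filename is hypothetical; a real deployment would add access control, audit logging, egress restrictions, hardware-backed key storage, and much more:

```python
# Two illustrative defense-in-depth layers for a model-weights file.
# NOT OpenAI's implementation; just the generic pattern.
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

def weights_fingerprint(path: str) -> str:
    """Integrity layer: a hash to detect tampering or substitution."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def encrypt_at_rest(path: str, key: bytes) -> None:
    """Encryption layer: the file on disk is useless without the key.
    (A real system would stream/chunk this; weights files are huge.)"""
    with open(path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(path + ".enc", "wb") as f:
        f.write(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()   # in practice: held in a KMS/HSM, never on disk
    print(weights_fingerprint("model.weights"))  # hypothetical filename
    encrypt_at_rest("model.weights", key)
```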

Now, I think they published this because OpenAI doesn't want the government to come in and say, look, we need people in here to make sure you know what you're doing. But I do think there will eventually be some kind of government intervention, because OpenAI has been such a tumultuous company that what has gone on is shocking. The CEO was fired; certain researchers left; certain researchers were fired; some people left saying the company isn't good on safety; others say AGI is happening next year. For a company that is literally the most advanced AI company in the world, there has been so much drama that it doesn't exactly inspire public trust in how they will secure the model weights. In addition, there are currently people on Twitter, like Jimmy Apples, who know when future releases are coming. How on Earth is that even a thing? I think there were even tweets about people taking pictures of laptops in cafés near OpenAI's research lab, and that's how they were getting the leaked info: maybe some OpenAI employees just left their laptops open, or someone was photographing what was on their screens at cafés just outside OpenAI headquarters. It's exactly stuff like this that shows they need serious, serious security, because if they really are on the path to AGI, they're on the path to superintelligence, which holds huge, huge implications for the future.

And the last part is where he talks about superalignment. Reliably controlling AI systems much smarter than we are is an unsolved, repeat, unsolved technical problem, and while it is a solvable problem, things could very easily go off the rails during a rapid intelligence explosion; managing this will be extremely tense, and failure could be catastrophic. He's basically saying: before we make something 10 times smarter than us, think about how much smarter we are than chimps. The raw gap isn't that large, yet that modest difference has let us do so much more, which shows you don't need to create something a million times smarter than you for it to be able to outmaneuver you and do things you won't truly understand. And remember, this is someone who literally worked on superalignment at OpenAI, so this isn't just a random blog post.

And here's where the real problem lies. By the time the decade is out, we're going to have billions of vastly superhuman AI agents running around, and these agents will be capable of extremely complex and creative behavior; we will have no hope of following along. We'll be like first graders trying to supervise someone with multiple doctorates. In essence, we're going to face the problem of handing off trust: how do we trust that when we tell an AI agent to go and do something, it will do it with our best interests in mind? This is essentially the alignment problem. We're not going to have any hope of understanding what our billion superintelligences are actually doing, even if they try to explain it to us, because we won't have the technical ability to reliably guarantee even basic side constraints for these systems. And he's basically stating that reinforcement learning from human feedback relies on humans being able to understand and supervise AI behavior, which fundamentally won't scale to a superhuman system: the method depends on us actually understanding what's going on, and if we can't understand what's going on, we can't reliably supervise these systems, which means it won't scale to superhuman systems. And the craziest thing is, remember, just last week OpenAI literally disbanded its superalignment team.
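To see why RLHF bottoms out in human judgment, it helps to look at where the human sits in the loop. A minimal sketch (my own illustration; the function names are hypothetical stand-ins, not any lab's actual pipeline):

```python
# Skeleton of the RLHF data-collection loop. The whole training signal
# originates in `human_prefers`: a person judging which output is better.
# If outputs become too complex for a person to judge (alien code, novel
# science), that judgment, and everything trained on it, stops being reliable.
import random

def model_generate(prompt: str) -> str:
    """Stand-in for sampling a response from the policy model."""
    return f"candidate-{random.randint(0, 99)} for {prompt!r}"

def human_prefers(a: str, b: str) -> str:
    """THE bottleneck: assumes a human can actually evaluate a and b."""
    return a if len(a) <= len(b) else b   # toy proxy for a human label

preference_data = []
for prompt in ["summarize this paper", "review this pull request"]:
    a, b = model_generate(prompt), model_generate(prompt)
    preference_data.append((prompt, a, b, human_prefers(a, b)))

# A reward model is then fit to preference_data and used to fine-tune the
# policy, so the policy can never be better aligned than these human labels.
print(preference_data)
```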

Here is a nice illustration: on one side, the little AI hands us a very basic piece of code, and we can easily see that it looks safe; on the other, we're going, wait a minute, what is all this stuff, is this safe, what's going on? It becomes very hard to interpret what on Earth is happening. In addition, some of the behaviors that may emerge are ones we don't want. If we train a base model to make money, by default it may well learn to lie, to commit fraud, to deceive, to hack, to seek power, because in the real world people actually do use those things to make money. Of course, we can add side constraints, such as don't lie and don't break the law, but if we can't understand what the models are doing, we won't be able to penalize the bad behavior, and if we can't enforce those side constraints, it's not clear what happens. Maybe they'll even learn to behave nicely when humans are looking and then pursue more nefarious strategies when we aren't watching, which is a real problem, and something that has already been observed. One of the main things that I genuinely think about on a day-to-day basis is this right here:

it says, what's more, I expect that within a small number of years these AI systems will be integrated into many critical systems, including military systems, and failure to do so would mean being dominated by adversaries. This is why it's such a trap, why we're on this train barreling down a super-risky pathway. Think about it like this: in the future we're going to have to equip many of our technologies with AI systems inside them, because if we don't, they're just not going to be as effective, and we'd be dominated by adversaries. But before AI got this good, everyone said we would never connect it to the internet, and now it's connected to the internet and people aren't batting an eye. The problem is that if we get an alignment failure once AI is already in every piece of infrastructure, what happens when AI fails and it's in every single piece of technology? It's pretty insane, and failures at that much larger scale could be really, really awful.

And here's another graphic which presents a lot of this. On one side we have AGI with reinforcement learning from human feedback: the failures are low-stakes, the architecture and algorithms are ones we understand, and the backdrop of the world is pretty normal. But then we get to superintelligence, and remember, the transition here is only two to three years at most. Once we get to superintelligence, the failures are catastrophic; the architecture is alien, designed by the previous generation of super-smart AI rather than by humans; and the world around it is going crazy, with extraordinary pressure to get this right. We'll have no ability to tell whether these systems are even aligned or what they're doing, and we'll be entirely reliant on and trusting of them. So how on Earth are we going to get this right? And here's the thing: no matter what we develop, true superintelligence is likely able to get around almost any security scheme; still, these measures buy us a lot more margin for error, and we're going to need any margin we can get.

Now here's one of the scariest things that I think about, something I saw covered in only one article, literally one, plus a Reddit post that I think got removed, so I'm not even sure anyone's still watching at this point. Basically: a dictator who wields the power of superintelligence would command concentrated power unlike anything we've ever seen. Think about it. If someone managed to control superintelligence, which is of course hard, since we may not be able to align it, we could end up with complete dictatorship. Millions of AI-controlled robotic law enforcement agents could police the populace; mass surveillance would be hypercharged; dictator-loyal AIs could individually assess every single citizen for dissent, with near-perfect lie detection rooting out any disloyalty. Essentially, the robotic military and police force could be wholly controlled by a single political leader and programmed to be perfectly obedient, with no risk of coups or rebellions, and his strategy would be near perfect because he has superintelligence behind him. What does the world look like when superintelligence is controlled by a dictator? There's simply no version of that where you escape. Past dictatorships were not permanent, but superintelligence could eliminate every historical threat to a dictator's rule and lock in their power. And if you believe in freedom and democracy, this is an issue: someone in power, even someone good, could simply stay in power, and you still need freedom and democracy to be able to choose. This is why, as he puts it, the free world must prevail. There is so much at stake here, and that's what makes it so striking that almost no one is taking this into account.

So let me know what you thought about Situational Awareness. I do apologize for making this video so long, but I'm glad I did, because there was still a lot I looked at that isn't covered here. If you want to go deeper, I will leave a link in the description to the 4-hour podcast between Leopold Aschenbrenner and Dwarkesh Patel, an interview that is remarkably insightful, because they talk about a lot of things you really should know. If there was anything I missed in this video, let me know what you think, because this is probably the piece of information that will stay with me the longest; I'll be constantly revisiting this document to see whether these predictions are coming true and where things are lining up.
