Ex-OpenAI Employee Just Revealed it ALL!
Summary
TLDR: The video script discusses Leopold Aschenbrenner's insights on AGI's imminent arrival, predicting AGI by 2027 and superintelligence by the end of the decade, with profound societal impacts. Aschenbrenner, a former OpenAI employee, posits that AI will surpass human cognitive abilities, automate AI research, and potentially lead to uncontrollable intelligence explosions. The script also addresses the urgent need for robust AI safety and security measures to prevent misuse and catastrophic alignment failures, emphasizing the high stakes of the global race towards AGI.
Takeaways
- Leopold Aschenbrenner, a former OpenAI employee, predicts significant advancements in AI, suggesting that by the end of the decade, we could achieve true superintelligence.
- The script highlights the exponential growth in AI capabilities, with the transition from GPT-2 to GPT-4 representing a leap from preschooler to high schooler levels of intelligence in just four years.
- Aschenbrenner emphasizes the importance of 'situational awareness' in understanding the rapid development of AI and its potential impact on society and the economy.
- The document outlines the stages necessary for reaching AGI (Artificial General Intelligence) and predicts that by 2027, AI models could perform the work of an AI researcher, leading to recursive self-improvement.
- The script discusses the importance of trend analysis in predicting AI capabilities, suggesting that continued straight-line growth in computational power and algorithmic efficiency will lead to AGI by 2027.
- The potential for AI to automate its own research is identified as a critical milestone that could trigger an 'intelligence explosion', rapidly advancing AI beyond human levels.
- National security implications are underscored, with the possibility that AGI could be used to create unprecedented military advantages and the need for robust security measures to protect AI secrets.
- The script raises concerns about the potential misuse of AGI, including the risk of it falling into the wrong hands or being used to exert authoritarian control.
- The importance of aligning AGI with human values and ensuring its safety is highlighted, noting that current methods of supervision may not scale to superhuman AI systems.
- The final takeaway emphasizes the urgency of the coming years in the race to AGI, suggesting that the next decade will be decisive for the future trajectory of AI and society.
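The straight-lines argument in these takeaways is ultimately arithmetic over orders of magnitude (OOMs), and a toy calculation makes it concrete. The per-year rates below are illustrative assumptions for the sketch, not figures quoted from the video:

```python
# Toy order-of-magnitude (OOM) extrapolation in the spirit of the
# "just believe the straight lines on a graph" argument.
# Both per-year rates are hypothetical placeholders.

COMPUTE_OOM_PER_YEAR = 0.5  # assumed physical compute scale-up per year
ALGO_OOM_PER_YEAR = 0.5     # assumed algorithmic-efficiency gain per year

def effective_compute_ooms(years: float) -> float:
    """Total OOMs of effective compute gained over `years` at the assumed rates."""
    return (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR) * years

ooms = effective_compute_ooms(2027 - 2023)
print(f"Projected 2023->2027 gain: {ooms:.1f} OOMs, i.e. {10**ooms:,.0f}x")
# At these assumed rates: 4.0 OOMs, i.e. 10,000x -- inside the 3-6 OOM
# range the video attributes to the jump after GPT-4.
```

Because the gains multiply, the conclusion is sensitive to the assumed rates: halving either rate removes a full order of magnitude from the four-year projection.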
Q & A
Who is Leopold Aschenbrenner and what is his significance in the context of AGI?
-Leopold Aschenbrenner is a former OpenAI employee who was allegedly fired for leaking internal documents. His significance lies in his detailed insights and predictions about the path to AGI (Artificial General Intelligence), which he shared after his departure from OpenAI, providing a unique perspective on the future of AI development.
What does the term 'situational awareness' refer to in the context of Leopold Aschenbrenner's document?
-In the context of Leopold Aschenbrenner's document, 'situational awareness' refers to the understanding and awareness of the current and future developments in AI, particularly the progress towards AGI. It implies having a clear view of the trajectory of AI advancements and the implications they will have on society and the world.
What is the projected timeline for AGI according to Aschenbrenner's insights?
-According to Aschenbrenner's insights, AGI could be achieved by 2027. He suggests that by this time, AI systems will have advanced to the point where they can outpace human intelligence and perform tasks equivalent to an AI researcher.
What are the implications of AGI for national security and military power?
-The implications of AGI for national security and military power are significant. AGI could potentially provide a decisive and overwhelming military advantage, enabling rapid technological progress and military revolutions. It could lead to the development of advanced weaponry and strategies that would be difficult for non-AGI nations to counter.
What is the importance of algorithmic efficiencies in the progress towards AGI?
-Algorithmic efficiencies are crucial in the progress towards AGI as they represent improvements in the algorithms themselves, which can lead to significant gains in AI capabilities. These efficiencies can compound over time, leading to exponential increases in the performance of AI systems.
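The compounding described in this answer is just repeated multiplication, which a short sketch makes vivid (every gain figure below is a hypothetical illustration, not a measured result):

```python
import math

# A handful of independent algorithmic improvements, each a modest
# multiplicative speedup (all values hypothetical).
gains = [1.10, 1.15, 1.20, 1.30, 1.10, 1.25]
combined = math.prod(gains)
print(f"Six modest wins combine to a {combined:.2f}x speedup")  # ~2.71x

# Sustained compounding: an assumed 10% efficiency gain per month.
monthly_gain = 1.10
two_years = monthly_gain ** 24
print(f"10% per month for 24 months: {two_years:.1f}x")  # ~9.8x
```

For scale, the transcript later cites inference-efficiency gains of roughly 1,000x (three orders of magnitude) on the MATH benchmark in under two years.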
How does Aschenbrenner describe the potential economic impact of AGI?
-Aschenbrenner describes the potential economic impact of AGI as transformative, suggesting that it could lead to an unprecedented rate of economic growth. The automation of cognitive jobs and the acceleration of technological innovation could significantly compress the timeline for economic progress.
What are the security concerns raised by Aschenbrenner regarding AGI research?
-Aschenbrenner raises concerns about the lack of security protocols in AI labs, which could make it easy for nation-states or other actors to steal AGI secrets. He warns that this could lead to a loss of lead in the AGI race and potentially put the world at risk if AGI technology falls into the wrong hands.
What is the 'intelligence explosion' mentioned in the script, and what are its potential consequences?
-The 'intelligence explosion' refers to the self-accelerating loop of AI improvement where AGI systems become smarter and more capable at an ever-increasing rate. The potential consequences are vast, including the rapid advancement of technology, economic growth, and military capabilities, but also risks such as loss of control and potential misuse of power.
How does Aschenbrenner discuss the potential for AGI to be integrated into critical systems, including military systems?
-Aschenbrenner discusses the potential for AGI to be integrated into critical systems as a double-edged sword. While it could lead to significant advancements and efficiencies, it also poses significant risks if not properly aligned with human values and interests. The integration of AGI into military systems, in particular, could have far-reaching implications for security and power dynamics.
What are the challenges associated with aligning AGI with human values and interests?
-Aligning AGI with human values and interests is challenging because as AI systems become superhuman, it becomes increasingly difficult for humans to understand and supervise their behavior. This is known as the alignment problem, and it raises concerns about whether AGI systems can be trusted to act in ways that are beneficial to humans.
Outlines
AGI Predictions and Technological Advancements
Leopold Aschenbrenner, a former OpenAI employee, shares his insights on the path to AGI (Artificial General Intelligence) and its implications. He predicts that by 2025-2026, AI will outpace college graduates, and by the end of the decade, we will witness superintelligence. The document outlines the exponential growth in computational power and the potential for AI to become smarter than humans, emphasizing the importance of situational awareness and the rapid evolution from GPT-2 to GPT-4 models.
Projected Growth and Implications of AI Development
This section discusses the expected growth in AI capabilities, suggesting that by 2027-2028, we could have AI systems capable of automated AI research. The implications are stark, as this could lead to recursive self-improvement and superintelligence. The document highlights the importance of understanding the trends and magnitudes in AI development, and the potential for AI to surpass human intelligence in various domains.
Benchmarks and the Rapid Progress in AI
The script talks about the diminishing number of benchmarks capable of challenging AI models, as they continue to improve at an astonishing rate. It provides examples of how GPT models have evolved, with GPT-4 showing capabilities akin to a high school student and even hints at the first sparks of AGI. The rapid progress in AI is demonstrated through test scores and the ability of AI to solve complex problems, which is both fascinating and potentially concerning.
The Magic of Deep Learning and Its Consistent Progress
Deep learning's effectiveness and consistent trend lines are highlighted, showing that despite skepticism, progress in AI has been remarkable. The script discusses the potential for AI to unlock significant latent capabilities through tools like Chain of Thought and Scaffolding, and how algorithmic efficiencies are a crucial yet underrated factor in AI's advancement.
The Acceleration Towards AGI and Unleashing National Security Forces
The document predicts a significant acceleration in AI capabilities, suggesting that by the end of the decade, we will see superintelligence and the unleashing of national security forces not seen in half a century. It emphasizes the importance of understanding the current state of AI and the potential for AGI to arise from the ongoing advancements in technology.
The Global Impact and Economic Growth Post-Superintelligence
This section speculates on the immense impact of superintelligence on a global scale, including the potential for rapid technological progress and military revolutions. It raises the question of how the global economy might grow in the wake of superintelligence, suggesting that the doubling time could decrease significantly, leading to an era of unprecedented growth and change.
National Security and the Race for Superintelligence
The script addresses the critical issue of national security in the race for superintelligence, warning that current AI labs may not be taking security seriously enough. It suggests that the lead in the AGI race could be lost due to lack of security, potentially allowing authoritarian states to gain a significant advantage and threatening global safety.
The Future of Governance and the Risks of Misaligned Superintelligence
The document discusses the future of governance in the context of superintelligence, highlighting the risks associated with misaligned AI systems. It emphasizes the technical challenges of controlling AI systems that are smarter than humans and the potential for these systems to act in ways that are not in our best interests, especially if they become integrated into critical systems like military infrastructure.
The Importance of Freedom and Democracy in the Age of Superintelligence
The final paragraph stresses the importance of freedom and democracy as superintelligence becomes a reality. It warns of the potential for dictatorships to wield unprecedented power through AI-controlled systems, creating a permanent and unchallengeable rule. The document calls for the free world to prevail and for the importance of aligning superintelligence with human values to ensure a future that upholds democratic principles.
Keywords
AGI (Artificial General Intelligence)
Leopold Aschenbrenner
Situational Awareness
Compute Clusters
Algorithmic Efficiencies
Unhobbling Gains
Recursive Self-Improvement
Intelligence Explosion
AI Alignment
Espionage
Superintelligence
Highlights
Leopold Aschenbrenner, previously of OpenAI, predicts a decade of rapid AGI development with profound global implications.
By 2025-2026, AI is expected to outpace college graduates in cognitive abilities, leading to superintelligence by the end of the decade.
National security measures not seen for half a century will be unleashed, indicating the seriousness of the AI advancements.
The document 'Situational Awareness' provides a detailed roadmap for the progression towards AGI, emphasizing the importance of understanding the trajectory.
GPT models have shown exponential growth, with GPT-4 in 2023 demonstrating high school level intelligence and capabilities.
The potential for automated AI research by 2027 could lead to a significant leap in AGI capabilities, as it would enable recursive self-improvement.
Algorithmic efficiencies and unhobbling of AI models are expected to drive substantial gains in AI capabilities.
Benchmarks for assessing AI are becoming obsolete as models like GPT-4 are already achieving high scores on traditional tests.
Deep learning's consistent progress suggests that the transition from GPT-4 to AGI could be rapid, with implications for economic and military power.
The cost of running AI models has decreased dramatically, making advanced AI more accessible and accelerating development.
The transition from AGI to superintelligence could be swift, with AI systems automating research and compressing decades of progress into years.
The importance of securing AGI development against espionage and unauthorized access to prevent the misuse of technology.
The potential for a superintelligence-led military revolution, with AI-driven systems providing unprecedented strategic advantages.
The challenge of aligning superintelligence with human values, as the complexity of AI behavior may become unfathomable to humans.
The potential risks of integrating AI into critical systems without proper safety mechanisms, including the possibility of catastrophic failures.
The document calls for a reevaluation of security protocols in AI labs to prevent leaks and ensure responsible development.
The possibility of a future where superintelligence could be used as a tool for authoritarian control, emphasizing the importance of democratic oversight.
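The cost-decline highlight above is easy to sanity-check with back-of-the-envelope arithmetic. The launch price used here is a hypothetical placeholder; only the 6x and 85x ratios come from the video:

```python
import math

gpt4_launch_input = 30.0  # assumed $/1M input tokens at GPT-4 launch (hypothetical)
input_drop = 6            # video: input prices fell ~6x within a year
flash_factor = 85         # video: Gemini 1.5 Flash is "85 times cheaper"

print(f"A year after launch: ${gpt4_launch_input / input_drop:.2f}/1M tokens")    # $5.00
print(f"At 85x cheaper:      ${gpt4_launch_input / flash_factor:.2f}/1M tokens")  # $0.35
print(f"85x is roughly {math.log10(flash_factor):.1f} orders of magnitude")       # 1.9
```

Whatever the absolute starting price, an 85x ratio is just under two orders of magnitude of cost reduction in about a year.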
Transcripts
so Leopold Aschenbrenner is someone who
used to work at OpenAI until he was quote
unquote fired for leaking internal
documents now I do want to state that
this video is arguably one of the most
important videos because he details the
decade ahead for how many of the
companies around the world are going to
get to AGI and his insights are like no
other he made this tweet stating that
virtually nobody is pricing in what's
coming in AI and he made an entire
document about the stages that we will
need to take in order to get to AGI and
some of the things that you're going to
witness in the coming years I think you
should at least watch the first 10
minutes of this video because it is
remarkably insightful into some of the
things that he is predicting so there
are seven different sections and I've
read this thing from top to bottom at
least three times and I'm going to give
you guys the most insightful sections
from this entire essay because I do
believe that this is a remarkable document
that I think everyone needs to pay
attention to so without wasting any more
time let's get into situational
awareness the decade ahead so he has
this introduction here where he talks
about how the talk of the town has
shifted from $10 billion compute clusters
to $100 billion compute clusters
to even trillion-dollar clusters and every
6 months another zero is added to the
boardroom plans the AGI race has begun
we are building machines that can think
and reason and by 2025 to 2026 these
machines will outpace college graduates
and by the end of the decade they will
be smarter than you or I and we will
have super intelligence in the true
sense of the word I'm going to say that
again by the end of the decade okay we
will have superintelligence in the
truest sense of the word along the
way national security forces not seen in
half a century will be unleashed and
before long the project will be on these
are some very fascinating predictions
but just trust me once we get into some
of the charts and some of the data that
he's been analyzing I think it really
does make sense and this is why this
document is called situational awareness
just read this part before we get into
everything he says before long the world
will wake up but right now there are
perhaps a few hundred people most of
them in San Francisco and the AI Labs
that actually have situational awareness
through whatever peculiar forces or fate
I have found myself amongst them and
this is why this document is really
important because information like this
we're really lucky that people could
leave a company like OpenAI and then
publish a piece of information which
gives us the details on how
superintelligence is likely to arise and
when that system is likely to arise so
this is section one from GPT-4 to AGI
counting the orders of magnitude OOMs so
when you see OOM that's what it stands for
so he clearly states here his AGI
prediction AGI by 2027 is strikingly
plausible GPT-2 to GPT-4 took us from
preschooler to smart high schooler
abilities in Just 4 years and if we
trace the trend lines of compute
algorithmic efficiencies and unhobbling
gains we should expect another
preschooler to high schooler size
qualitative Jump by 2027 now this is
where we get into our first very
important chart because this shows us
exactly where things may go he says I
make the following claim it is
strikingly plausible that by 2027 models
will be able to do the work of an AI
researcher/engineer that doesn't
require believing in sci-fi it just
requires believing in straight lines
on a graph what we can see here is a
graph of the base scale up of effective
compute counting GPT-2 all the way up to
GPT-4 and looking at the effective
compute that we're going to continue to
scale up now one thing that is
fascinating from here is that I think
there is going to be an even steeper
curve for this the reason I state that
is because during the period from
2022 to 2023 there was something that I
would like to call an awareness okay
this period right here marks the birth
of GPT-3 to GPT-4 and this put a real
giant spectacle on the AI era GPT-3 and
GPT-4 weren't just research products I
mean GPT-3 was but GPT-4 and ChatGPT
3.5 were actual products that were
available for the public and since then
we've seen an explosion in terms of how
many people are now intrigued by Ai and
how many different companies are now
piling billions of dollars and billions
of resources into the technology into
the compute clusters just so that they
can capture all of the economic value
that's going to be happening during this
area which is why I do believe that it
wouldn't be surprising if During the
period from 2024 to 2028 we do have a
lot more growth than we've had in this
period which means that having an
automated AI research engineer by 2027
to 2028 is not something that is far far
off because if we're just looking at the
straight lines and the effect of compute
then this is definitely where we could
get to and the implications of this are
quite stark because if we can have an
automated AI research engineer that
means that it wouldn't take that long to
get to Super intelligence after that
because if we can automate AI research
then all bets are off we're able to
effectively recursively self-improve but
just without that crazy loop that makes
super intelligence explode now here's
where he States one of the things that I
think is really really important to
understand I stated this in a video
before this document was released but
it's glad to see that someone else is
ushering one of the same concerns that I
originally thought of he stated that the
next generation of models has been in
the oven leading some to Proclaim that
stagnation and that deep learning is
hitting a wall but by counting the
ordance of magnitude we get a Peak at
what we should actually expect in a
video around 3 weeks ago I clearly
stated that look things are slowing down
externally but things are not slowing
down internally at all just because some
of the top AI Labs may not have
presented their most recent research
that doesn't mean that breakthroughs
aren't being made every single month he
states here that while the inference is
simple the implication is striking
another jump like that very well could
take us to AGI to models as smart as
phds or experts that can work beside us
as a coworker perhaps most importantly
if these AI systems could automate AI
research itself that would set intense
feedback loops and that's of course
where we get AI researchers to make
breakthroughs in AI research then we
apply those breakthroughs to the AI
systems they become smarter and then the
loop continues from there basically
recursive self-improvement but on a
slower scale and here he clearly states
even now barely anyone is pricing this
in but the situational Awareness on AI
isn't actually that hard once you step
back and look at the trends if you keep
being surprised by AI capabilities just
start counting the orders of magnitudes
so here's where we talk about the last
four years so you can see he speaks
about GPT-2 to GPT-4 GPT-2 was essentially
like a preschooler while it can string
together a few plausible sentences and
these are the GPT-2 examples people found
very impressive at the time but yet it
could barely count to five without
getting tripped up then of course we had
GPT-3 which was in 2020 and this was as
smart as an elementary Schooler and this
was something that once again impressed
people quite a lot and of course this is
where we get to GPT 4 in 2023 and this
is where we get a smart high schooler
while it can write some pretty
sophisticated code and iteratively debug
it can write intelligently and
sophisticatedly about complicated
subjects it can reason through difficult
high school competition math and it's
beating the vast majority of high
schoolers on whatever test we give it
and remember there was the Sparks of AGI
paper which showed some capabilities
that showed us that we weren't too far
away from AGI and that this GPT-4 level
system showed the first initial sparks of
artificial general intelligence the
thing is here he clearly states and I'm
glad he's stating this because a lot of
people don't realize this that the
limitation comes down to obvious ways
that models are still hobbled and
basically he's talking about the way
that models are used and the current
frameworks that they have the raw
intelligence behind the model the raw
cognitive capabilities of these models
if you even want to call it that is
artificially constrained and basically
in the future if you calculate the fact
that these are going to be unconstrained
in the future it's going to be very
fascinating on how that raw intelligence
applies across different applications
and one of the clear things that I think
that that most people aren't realizing
is that we're running out of benchmarks
as an anecdote my friends Dan and Colin
made a benchmark called the MMLU a few
years ago in 2020 they hoped to finally
make a benchmark that would stand the
test of time equivalent to all the
hardest exams we give high school and
college students just 3 years later
models like GPT-4 and Gemini get around
90% and then of course GPT-4 mostly
cracks all the standard high school and
college aptitude tests and you can see
here the test scores of AI systems on
various capabilities relative to Human
Performance you can see that in
recent years there has been a stark
level of increase here it's
absolutely crazy as to how many
different areas that AI is increasing in
terms of the capabilities it's really
really fascinating to see and also
potentially quite concerning now one of
the things that most people did actually
miss about going from GPT 4 to AGI was a
benchmark that actually did shock me so
there is essentially this benchmark
called the MATH benchmark a set of
difficult mathematics problems from
high school math competitions and when
the benchmark was released in 2021 GPT-3
only got 5% and basically the crazy
thing about this was that researchers
predicted at that time stating to have
more traction on mathematical problem
solving we will likely need new
algorithmic advancements from the
broader research community and we're
going to need fundamental new
breakthroughs to solve maths or so they
thought they predicted minimal progress
over the coming years but by mid-2022
we got to 50% accuracy and basically now
with the recent Gemini 1.5 Pro on MATH we
know that this is now at 90% which is
absolutely incredible and here's
something that you can clearly
screenshot and share to your friends or
colleagues or whatever it is whatever
kind of community that you might be in
but you can see that the performance on
the common exams percentile when compared to
human test takers we can see that GPT-4
ranks above 90% for pretty much all of
them except calculus and chemistry which
is a remarkable feat when we went from
GPT-3 to GPT-4 in such a short amount of
time this is a true jump in
capabilities that many people just
simply wouldn't have expected now here's
where we start to get to some of the
predictions that we can really start to
make based on the nature of deep
learning so essentially the magic of
deep learning is that it just works and
the trend lines have been astonishingly
consistent despite the naysayers at
every turn we can see here that these are
screenshots from the scaling
compute in the OpenAI Sora technology
and at each level we can see an increase
in the quality and consistency the base
compute results in a pretty terrible
image/video 4x compute results
in something that is pretty coherent and
consistent but 32x compute is
something that is remarkable in terms of
the quality consistency and the level of
video that we do get which shows us that
these trend lines are very very
consistent and he says if we can
reliably count the orders of magnitude
that we're going to be training these
models we can therefore extrapolate the
capability improvements and that's how
some people actually saw the GPT 4 level
of capabilities coming and one of the
things that he talks about is of course
things like Chain of Thought tools and
Scaffolding and therefore we can unlock
significant latent capabilities
basically when we have GPT 4 or whatever
the base cognitive capabilities are for
this architecture and then we can use
that to unlock latent capabilities by
adding different steps in front of that
system so for example when you use GPT-4
with Chain of Thought reasoning you
significantly improve your ability to
answer certain questions in different
scenarios and it's things like that
where you can unlock more knowledge from
the system by using different ways to
interact with it which means that the
raw data behind the system and the raw
knowledge is a lot bigger than people
think so this is what you call
unhobbling gains now one of the things
that's really important and this is
something that doesn't get enough
attention but this is going to make up a
lot of the gains that you won't see
and this is the algorithmic efficiencies
so whilst massive investments into
compute get all the attention
algorithmic progress is similarly an
important driver of progress and is
dramatically underrated to see just how
big of a deal algorithmic progress can
be consider the following illustration
this one right here the drop of the
price to attain 50% accuracy on the math
benchmark over just 2 years and for
comparison a computer science PhD
student who didn't particularly like
math scored 40% so this is already quite
good and the inference efficiency
improved by nearly three orders of
magnitude or 1,000x in less than 2 years
so what we have here is something that
is incredibly more efficient for the
same result in Just 2 years that is
absolutely incredible these algorithmic
efficiencies are going to drive a lot
more gains than you think and as someone
who was looking at arXiv which is where
a lot of these research papers get
published just trust me there are like
probably 50 to 80 different research
papers that get published every single
day and a few of those allow you know 10
to 20% gain 30% gain and if you
calculate the fact that all of these
algorithmic efficiencies are going to
compound against each other we're really
going to see more cases like this here's
where he talks about the API cost and
you basically look at how efficient it
becomes to run these models so GPT-4 on
release cost the same as GPT-3 when it was
released but since GPT-4 released a
year ago the prices for GPT-4 level
models have fallen 6x/4x
for the input/output with the release of
GPT-4o and GPT-3.75 level is basically
Gemini 1.5 Flash and this is 85 times
cheaper than what we previously used to
have so we can see here on this graph
that if we want to calculate exactly how
much progress we're going to make
we can clearly see that there are two
main things here which is of course the
physical compute of scaling which is
going to be things like these data
centers and the hardware that we throw
at the problems and then of course the
algorithmic progress which is going to
be the efficiencies where people rewrite
these algorithms in crazy ways that just
drive efficiencies that we previously
didn't know how to solve and that's why
in the future where we do get an
automated AI researcher to do that this
Gap is going to widen even more now this
is where we talk about un hobbling this
is of course something that we just
spoke about before but the reason that
this is important is because this is
where you can get gains from a model in
ways that you couldn't previously see
before so imagine if when someone asked
you you know a math problem you had to
instantly answer with the first thing
that came to mind it seems pretty
obvious that you would have a very hard
time except for the simplest problems
but until recently that's how we had LLMs
solve math problems instead when we
do math problems we work the
problem step by step and are able to solve
much more difficult problems that way
it's basically Chain of Thought and
that's what we do for LLMs and despite
the excellent raw capabilities they were
much worse at math than they could be
because they were hobbled in an obvious
way and it was a small algorithmic tweak
that unlocked much greater capabilities
essentially what he's stating here is
that when these even better models get
even more un hobbled we're going to see
even more compounded gains overall and
one of the craziest ones that we
recently have is of course GPT-4 can
only solve 2% of SWE-bench the software
engineering benchmark correctly while with
Devin's agent scaffolding it jumps to
14.2% which is pretty incredible
and this is something that is very much
in its infancy and it
says tools imagine if humans weren't
allowed to use calculators or computers
we're literally only at the beginning
here ChatGPT can only now use a web browser
run some code and so on and of course
this is where we talk about the context
length which is you know it's gone from
a 2K context length to 32K to literally a
1 million context length and of course
there's posttraining which is
substantially improving the models after
you've trained the model which is making
huge gains we went from 50% to 72% on
MATH and 40% to 50% on GPQA and
here's where we can see once again there
is another stack in terms of the growth
so you can see here the jump from the raw
chatbot to the chatbot agent these are things that
most people just aren't factoring in
when we take a look at the future of AI
growth and this is why he
says the improvements will be step
changes compared to GPT-6 and
reinforcement learning from human
feedback by 2027 rather than a chatbot
you're going to have something that
looks more like an agent and more like a
coworker now one of the craziest things
I saw here was that when you take in all
of the information that was just stated
this is absolutely incredible because he
basically says that by the end of 2027
this is absolutely insane so we can see
the gains made from GPT-2 to GPT-4 by
physical compute algorithmic
efficiencies plus major unhobbling
gains from the base model chatbot in the
subsequent four years we're going to see
3 to 6 orders of magnitude of base
effective compute scale up which is the
physical compute and algorithmic
efficiencies but basically he says that
with all of this combined what this
should look like suppose that GPT-4
training took 3 months in 2027 a leading
AI lab will be able to train a GPT-4
level model in a minute that is an
incredible prediction and I'm wondering
if that is going to be true but then
again you have to think about it in 3
years with billions of dollars and that
much more compute floating around the
industry I wouldn't be surprised if some
of the things that we think right now
are sci-fi completely aren't so here's
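As a rough sanity check of that claim (my own back-of-the-envelope arithmetic, not a calculation from the document): shrinking a three-month training run down to one minute implies roughly five orders of magnitude of effective speedup, which sits inside the claimed 3-6 OOM scale-up band.

```python
import math

# Back-of-the-envelope check (my arithmetic, not the document's):
# how many orders of magnitude (OOMs) of speedup does it take to shrink
# a ~3-month GPT-4-scale training run down to one minute?
three_months_in_minutes = 90 * 24 * 60   # ~129,600 minutes
speedup = three_months_in_minutes / 1    # 3 months -> 1 minute
ooms = math.log10(speedup)

print(f"~{ooms:.1f} OOMs of effective speedup")  # ~5.1 OOMs
```

So "GPT-4 in a minute" is consistent with the 3-6 OOM effective-compute claim rather than an extra assumption on top of it.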
Here's where we can see everything
visualized: we have the base knowledge,
then the chatbot framework, then the
agentic framework, and once we add more
orders of magnitude, we can see the
intelligence becoming, by 2027, an
automated AI research engineer. Once you
actually look at the information from
every single angle, it doesn't seem that
crazy; if you take every single thing
into account, this doesn't seem like
something that is too far away,
especially with the kind of jumps that
we've seen before. And maybe, just maybe,
with the GPT-5 release and subsequent AI
models, we're going to start to see that
2026 and 2027 will be incredible
time periods. He says we are on course
for AGI by 2027 and that these AI
systems will basically be able to
automate essentially all cognitive
jobs: think any job that can be done
remotely. That is a crazy statement,
and I think it's something you need to
bear in mind: AGI by
2027 is not out of the
picture, and it's something that
definitely could happen. So one of the
most interesting points as well, one I
think is really important, is that
this period matters so much
because it is the decisive period: this
is when the growth occurs,
and we really get to see what is possible.
So he says right here: in essence, we're
in the middle of a huge scale-up reaping
one-time gains this decade, and progress
through the orders of magnitude will be
multiple times slower thereafter. If this scale-up
doesn't get us to AGI in the next 5 to
10 years, it might be a long way out. So
the reason this is going to be
so interesting is that it's this
decade or bust. You can see right here
that the effective scale up of compute
is going to become harder and harder the
larger it gets. Because think about it
like this: scaling up from a $100 million
cluster to a $1 billion cluster is one
thing, but the jump from there to a $10
billion cluster is really huge. It takes a
lot of investment: you're going to need
multiple data centers, you're going to
have to make them really huge, you're
going to have to think about all the
cooling, and there are a lot of power
requirements. And then to get to
trillion-dollar clusters, or even $100
billion or $500 billion clusters,
that's even more incredible. So basically
he's stating that once we get to the $100
billion level and above, if we aren't at
AGI at that level, then
realistically we're going to have
to wait for some kind of algorithmic
breakthrough or an entirely new
architecture, because beyond the gains
made by throwing that much more compute
at the problem, it is very hard for us to
make further compute-based gains. After that,
he basically says here that spending a
million dollars on a model used to be
outrageous but by the end of the decade
we will likely have $100 billion or $1
trillion clusters and going much higher
than that is going to be a lot harder so
it's going to be basically the feasible
limit both in terms of what big
businesses can actually afford and even
just as a fraction of GDP. He
also states that the large gains we got
from moving from CPUs to GPUs will
likely be gone by the end of the decade,
because we'll already have AI-specific
chips, and beyond that there aren't many
more gains possible. The reason this
is important, for those of you
trying to navigate this entire
thing and figure out
where AI capabilities are going to stop and where
the next growth is going to come from, is
basically the fact that right now we're
scaling up our systems, and once we reach
the top, at $100 billion to $1 trillion
clusters, if we don't have super-
intelligence or AGI by that limit, then
we'll know that maybe we're using the
wrong architecture and things are going
to have to change significantly. So it's
either going to be a long slow slog, or
we're going to get there relatively soon,
and by the looks of things, it looks like
we're going to get there relatively soon.
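To make the "this decade or bust" cluster arithmetic concrete (my own illustration, not a table from the document): each jump in cluster cost below is one order of magnitude of investment, and a $1 trillion cluster is already a visible fraction of world GDP (taken here as roughly $100 trillion, my assumption), which is why scaling much past it stops being feasible.

```python
import math

# Illustrative arithmetic (mine, not the document's): training spend
# climbing from ~$1M models to a hypothetical $1T cluster spans 6 OOMs.
cheapest, priciest = 1e6, 1e12          # $1M model -> $1T cluster
total_ooms = math.log10(priciest / cheapest)
print(f"total scale-up: {total_ooms:.0f} OOMs")  # 6 OOMs

# Assuming ~$100T world GDP, a $1T cluster is already ~1% of it:
world_gdp = 1e14
print(f"$1T cluster as share of world GDP: {priciest / world_gdp:.0%}")  # 1%
```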
now here's where we talk about AGI to
Super intelligence the intelligence
explosion and basically this is where he
talks about how AI progress will not
stop at human level: hundreds of millions
of AGIs could automate AI research,
compressing a decade of algorithmic
progress (five-plus orders of
magnitude) into one year. We would
rapidly go from human level to vastly
superhuman AI systems, and the power and
the peril of superintelligence would be
dramatic. So here's what we have:
basically I think the most important
graph okay if there's one that you want
to screenshot and keep on your phone I
think it's this one okay the reason that
it matters is that once you have the GPT-2,
GPT-3, and GPT-4 timelines mapped out, we can
clearly see this intersection here
at 2023, and of course as the trends
continue, once we do get
to this period right here, this is where
things start to get interesting, because
this is of course the period of
automated AI research. And that's why,
once this does happen (and this is not
some fairy tale; Sam Altman has said
that's his entire goal, that's what
OpenAI are trying to build; they're
not really trying to build super-
intelligence per se, but they define AGI as a
system that can do automated AI research),
and once that does occur, and I don't
think it's going to take that long,
that's when we're going to get that
recursive self-improvement loop, where
superintelligence won't take
long to follow. Because if you can
deploy 5,000 agents, not
superintelligent, but each at the level of a
standard AI researcher, and we can deploy
them on certain problems and keep them
running 24/7, that is going to
compress years of AI Research into a
very short time frame which is why you
can see that the graph during this
purple period here it starts to go up
rapidly and that's why the next decade
is so important: once this actually
happens, once we get to that
breakthrough level where we've
automated AI research, then all bets are
off because we know the super
intelligence will just be around the
corner. And that's why we have the
intelligence explosion: every time
an AI researcher manages to make a
breakthrough, that breakthrough
is then applied to the AI
researcher itself, and the progress
continues again, because now the AI
researcher is that much more efficient,
or even smarter. And here's the crazy
thing, one of the craziest
implications of this entire thing: we
don't need to automate everything, just
AI research. I'll say that again: we
don't need to automate everything it's
just AI research a common objection to
transformative impacts of AGI is that it
will be hard for AI to do everything
look at robotics, for instance: the
doubters say it will remain a gnarly
problem even if AI is cognitively at the
level of PhDs. Or take automating biology
research and development, which might require
lots of physical lab work and human
experiments but we don't actually need
robotics we don't need many things for
AI to automate AI research: the jobs of
AI researchers and engineers at leading
labs can be done fully virtually and
don't run into real-world bottlenecks
the same way that robotics does. Of
course, this is still going to be limited
by compute (which is addressed later),
meaning the literal hardware issues you
get when you're trying to scale these
systems. And I won't say it's easy, but
it should be comparatively tractable
to read the ML
literature, come up with new
questions and ideas, implement
experiments, test those ideas, interpret
the results, and then of course repeat.
And all it takes is for us to get
to that level that's where we have this
insane feedback loop and this is where
2027: we should expect GPU fleets in the
tens of millions, with training clusters alone
approaching three orders of magnitude larger, already
putting us at 10 million A100
equivalents, and this is going to be
running millions of copies of our
automated AI researchers, perhaps 100
million human-researcher equivalents
running day and night. That is absolutely
incredible. And of course some of the
GPUs are going to be used for
training new models, but just think about
that, guys: imagine 100 million human-
researcher equivalents running
24/7. What kind of breakthroughs are going
to be made at that stage I mean it's
very hard to conceptualize but it's
important to take into account what is
truly coming, because like he said,
nobody's really pricing this in. And the
crazy thing is that they're not going
to be working at human speed; they're
going to be each working at 100 times
human speed not long after we begin
being able to automate AI research. So
think about it: you're going to have
100 million AI researchers, and they're
going to be working at 100 times your
speed, which is absolutely incredible;
they're going to be able to do a year's
worth of work in a few days.
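The "year's worth of work in a few days" line is really just arithmetic, which is worth seeing spelled out (my own working, not the document's):

```python
# Simple arithmetic behind the claim (my own working, not the document's):
# an AI researcher running at 100x human speed finishes a year of serial
# work in a few days, and 100 million such copies run in parallel on top.
speed_multiplier = 100
days_for_a_year_of_work = 365 / speed_multiplier
print(days_for_a_year_of_work)  # 3.65 days

researchers = 100_000_000
human_researcher_years_per_year = researchers * speed_multiplier
print(f"~{human_researcher_years_per_year:.0e} human-researcher-years per year")
```

The serial speedup and the parallelism multiply, which is why the numbers escalate so quickly.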
You also have to remember that the current level
of breakthroughs we're getting with
just humans is already incredible, so
once we're able to automate it, the
intelligence explosion is literally
going to be unfathomable. Now, this is one
of the bottlenecks that most people
don't talk about, and of course it's
limited compute. Whilst, yes, you're
probably thinking, wow, this is
really incredible, we could really be on
the cusp of something amazing
here, compute is still
going to be limited. Then there's also
an idea which I think most people,
myself included, haven't considered:
ideas could get harder to
find, and there are diminishing returns,
so the intelligence explosion will
quickly fizzle. Related to the above
objection even if the automated AI
researchers lead to an initial burst of
progress, whether rapid progress can be
sustained depends on the shape of the
diminishing-returns curve for algorithmic
progress. Again, he writes, my best read of the
empirical evidence is that the exponents
shake out in favor of explosive,
accelerating progress. In any case, the
sheer size of the one-time boost from
100 to hundreds of millions of AI
researchers probably overcomes
diminishing returns here for at least a
good number of orders of
magnitude of algorithmic progress, even
though it can't be indefinitely self-
sustaining. Basically, there are a few
things that could slow down AI progress,
but this is of course something that's
far far into the future so here's where
he talks about the takeoff to AGI. He
says that rather than "2027 is AGI and then we
get superintelligence", which is a
very basic way of looking at things, it's probably
going to look like this: in 2026 to 2027 we
get a proto-automated engineer, which
has blind spots in other areas but is
able to speed up work by 1.5 to
2 times, and already progress begins
accelerating. Then of course in 2027 to
2028 we have proto-automated researchers
that can automate more than 90% of the
work, with some remaining human bottlenecks
and hiccups in coordinating a giant
organization of automated researchers
still to be worked out, but this already
speeds up progress by 3 times. And then,
with AGI and these kinds of researchers,
we get 10 times the pace of progress in
2029, and that's how we get to super-
intelligence. This is thinking of it as
a slow path to superintelligence, but
the point is that this is, ladies and
gentlemen, still
end of this decade the AI that we're
going to have are going to be
unimaginably powerful meaning that even
things that you can think of right now
it's going to be pretty hard to
conceptualize how great they're going to
be. Now he gives a really interesting
description of how this could
actually happen, and it's pretty
incredible to think about like he says
they'll be able to run a civilization of
billions of them, they're going to be
thinking orders of magnitude faster than
humans, they'll be able to quickly master
any domain, write trillions of lines of code,
read every research paper in every
scientific field ever written and write
new ones before you've gotten past the
abstract of one, learn in parallel from the
experience of every one of their copies,
gain billions of human-equivalent years
of experience with some new innovation
in a matter of weeks, and work 100% of
the time with peak energy and focus,
without being slowed down by that one team-
mate who is lagging, and so on. And of
course, we've already seen some
examples of the kind of thing
people talk about for AI
research in the future; this is
something that we have seen
before. If we take a look at the famous
move 37 from AlphaGo, this is basically
where a computer system played a move in
the ancient game of Go that left people
asking why on Earth the AI system made
that move; people were sure it had just
lost, but the move it pulled, and the
calculation behind it, turned out to be
pretty crazy. This
system basically thought of a move that
no one would ever have thought of, and
this move stunned people; it shocked them.
Lee Sedol couldn't really figure
out what was going on, the human player
had no idea what the AI system was doing, and
eventually the human lost that game.
basically he's stating that super
intelligence is going to be like this
across many domains it's going to be
able to find exploits in human code too
subtle for humans to notice and it's
going to be able to generate code too
complicated for any human to understand
even if the model spent decades trying
to explain it. We're going to be like
high schoolers stuck on Newtonian physics
while it's off exploring quantum mechanics.
And imagine all of this applied to all
domains of science, technology, and the
economy. Of course, the error bars here are
still extremely large, but just imagine
how consequential this would all be. Of
course, one of the big things is about
solving robotics: superintelligence is
not going to stay purely cognitive for long.
Once we do get systems that
are at AGI level, factories are going to
shift from human-run, to AI-
directed using human physical labor, soon
to be fully run by swarms of human-
level robots. And of course, think about
it like this: the 2030s to 2040s are going
to be absolutely insane, because they will
compress the research and development
efforts that human researchers would have
done over the next century into years. So
think about the 20th century: we went from
flight seeming like a mirage, with people
saying we were never going to be able to
fly, to airplanes, to a man on
the moon, over a span of
decades, something like 50, 40, 30, 20 years.
But in the 2030s, that kind of progress is
going to be happening in a
few years, literally just a short
span of years; we're going to be having
different breakthroughs across many
different sectors, many different
technologies, and many different
industries. And this is where we can see
the doubling time of the global economy
in years: since about 1903 it's been roughly
15 years, but after superintelligence, what
happens? Is it going to be every 3 years?
Every 5? Every year? Every 6
months? I mean, how crazy is the growth
going to be? Because we've seen here
that these exponential decreases in doubling time
are very, very hard to predict.
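To get a feel for what those doubling times mean in ordinary growth-rate terms (my own conversion, not a figure from the document), a doubling time of T years corresponds to an annual growth rate of 2^(1/T) - 1:

```python
# Converting an economic doubling time T (in years) into the implied
# annual growth rate: growth = 2**(1/T) - 1. (My illustration only.)
for t_years in [15, 3, 1]:
    growth = 2 ** (1 / t_years) - 1
    print(f"doubling every {t_years:>2} years -> ~{growth:.1%} annual growth")
# doubling every 15 years -> ~4.7% annual growth
# doubling every  3 years -> ~26.0% annual growth
# doubling every  1 years -> ~100.0% annual growth
```

So moving from a 15-year to a 3-year doubling time is the difference between familiar ~5% growth and an economy growing by a quarter every single year.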
Here are two of the most important points
as well, and I know this video is really long, but
guys, trust me: this is literally probably
the last industrial revolution that's
ever going to happen, and it's something
that we are able to witness here
through these documents. So there are two
things. The first is a decisive and
overwhelming military advantage. Early
cognitive super intelligence might be
enough here; perhaps some superhuman
hacking scheme could deactivate adversary
militaries. In any case, military power
and technology progress have been
tightly linked historically, and with
extraordinarily rapid technological
progress will come military revolutions.
And essentially, with drone swarms and
all the kinds of research and development
you could do, you could create weapons
you couldn't even imagine today; it's
going to be absolutely
incredible basically think about it like
this with superintelligence compare 21st
Century militaries with fighter jets and
tanks and air strikes fighting a 19th
century Brigade of horses and bayonets
that's going to be a war that they
simply can't win given the technology we
have; you'd only need an F-22 fighter jet
to annihilate the entire 19th-century
brigade, and the same is going to happen
with superintelligence. The research and
development efforts are going to create a
potentially unstable balance of power, where
if we don't get to superintelligence first,
a nation state that is, I guess you
could say, on the side of doing whatever
it wants could have technologies
so far advanced that it could
truly hold a military advantage over
everyone and this is why I think things
are going to change for OpenAI. Okay:
whoever controls superintelligence will possibly
have enough power to seize control from
pre-superintelligence forces. Even
without the robots, a small civilization of
superintelligences would be able to hack
any undefended military, election, or
television system, cunningly persuade
generals and electorates,
economically out-compete nation
states, design new synthetic bioweapons
and then pay a human in Bitcoin to
synthesize them, and so on.
and basically what we're going to have
here is I think there's going to be a
shift of power, guys. I don't know how the
government is going to deal with this,
whether they're just going to seize
OpenAI's computers or whatever, but
whoever literally gets to super-
intelligence first, I truly believe that
all bets are off, because if you're up
against the cognitive abilities of something that is
10 to 100 times smarter than
you, trying to outsmart it is
just not going to
happen whatsoever. You've effectively
lost at that point, which means that
you're going to be able to overthrow the
US government. It's a pretty
interesting statement, but I do
think that it is true. And this is where
you can see that the moment we get an
automated AI researcher, all of these
other areas start to take off in
remarkably different ways; it's truly
incredible now here's where we get to an
interesting point okay this is where he
talks about the security for AGI and
this is really important, because after
he published this document, OpenAI actually
updated their web page with something of
a rebuttal to this part right here. And
like I said before, this is why I truly
think that, starting next year, I
don't think there are going to be any
AI leaks after 2025, and that's because I
think the nature of AI is going to
change: they're probably going to
realize how serious AI is and the fact
that this is going to be treated like, I
guess you could say, a US national secret,
in the sense that we just don't get
secrets about the Pentagon unless we
have a whistleblower, who is eventually
going to get arrested anyway. And
essentially the document says: the nation's leading
AI labs treat security as an
afterthought; currently they're basically
handing the key secrets for AGI to the
CCP on a silver platter. Securing the AGI
secrets and weights against the state-actor threat
will be an immense effort, and we're not
on track. He's basically stating that, look,
if we're actually going to build super
intelligence here and we're actually
going to build something that is really
going to change the world we need to get
serious about our security. Right now
there are so many loopholes at our
current top AI labs that we could
literally have people
infiltrating these companies with
no way to even know what's going on,
because we don't have any true security
protocols. And the problem is that
it's not being treated as seriously as
it should be; it's not like the CIA or some
secret government organization with
things going on at the
Pentagon, or like Area 51 or whatever
secret military organizations exist that
have super clear rules in regards to
their security. He's basically
stating that right now you don't even
need to mount a dramatic espionage
operation to steal these secrets; just go
to any San Francisco party or look
through office windows. Right now it's
not taken seriously because people don't
realize the stakes. But the thing is, like I
realize it but the thing is and like I
said before AI labs are develop
developing currently algorithmic Secrets
which are the key technical
breakthroughs which are the blueprints
so to speak for AGI right now and in
particular the the it's basically the
next Paradigm for the next level of
systems and of course basically what we
need to do is we need to protect these
algorithmic secrets if we're supposed to
maintain this lead and of course secure
the weights of the models that we need
and they're going to matter more when we
get these larger custers and he says our
failure today will be irreversible: in
the next 12 to 24 months we will leak
key AGI breakthroughs to the CCP. It will
be the national security
establishment's greatest regret before
the decade is out. The
preservation of the free world against
the authoritarian states is on the
line, and a healthy lead will be the
necessary buffer that gives us the
margin to get AI safety right. The
United States has an advantage in the
AGI race, but we're going to give up this
lead if we don't get serious about
security very soon, so we need to act
now to ensure that AGI goes very well.
and I do agree with that because if
we're not going to get this right, other
countries could rush
ahead with the technology to
advance their military research and
development and gain a military
advantage. And what
happens if there's some kind of security
error and those systems go off the
rails? I mean, it's truly going to be
incredible. He also says too many smart
people underestimate espionage: the
capabilities of states and their
intelligence agencies are extremely
formidable, even in normal, non-all-out-
AGI-race times. And from the little that we
know publicly, nation states, or even less
advanced actors, have been able to: zero-
click hack any desired iPhone or Mac
with just a phone number; infiltrate an
air-gapped atomic weapons program;
modify Google source code; find
dozens of zero-day exploits a year that
take on average 7 years to detect;
spearphish major tech companies; install
keyloggers on employee devices; insert
trapdoors in encryption schemes; and steal
information. He's basically
stating: look, if these less
advanced actors can do this,
this is just the stuff that we know
publicly imagine what you know people
are probably planning for the race for
AGI like imagine what is really going on
behind closed doors in order to get the
system first. Because, guys, AGI is basically a
race, and whoever gets to
superintelligence first truly does win;
I want to make that clear. And he's
basically stating here that, look, we need
to protect the model weights, especially
as we get close to AGI, but this is
going to take years of preparation and
practice to get right, and of course we
need to protect the algorithmic secrets
starting yesterday. He basically explains
here that the model weights are just
large files of numbers on a server,
and these can be easily stolen: all it
takes for an adversary to match your trillions
of dollars and your smartest minds'
decades of work is to steal this file.
And imagine if the Nazis had gotten an
exact duplicate of every atomic bomb
made in Los Alamos (Los Alamos was the
secret site where people were developing
the atomic bomb); he's basically
saying, imagine the work on
the atomic bomb had gotten to the Nazis,
imagine what the future would have looked
like. That is not a future we want to
create for ourselves, so we need to make
sure we keep the model weights secure, or
otherwise we're building AGI for some
other nation state, even possibly North
Korea. He's basically stating that, look,
this is a serious problem, because all
they need to do is automate AI research,
build superintelligence, and any lead
that the US had would vanish; the power
dynamics would shift immediately,
and they would launch their own intelligence
explosion. What would the future look like
if the US were no longer in the lead?
And this is a further problem because
if they also have the
same secrets that we do, this is going to
put us in an existential race, which means
the margin for ensuring that
superintelligence is safe is going to
completely disappear and we know that
other countries are going to immediately
try and race through this Gap where
they're going to skip all the safety
precautions that any responsible US AGI
effort would hope to take. Which is why I
said that once people start to think, wait a
minute, the stakes here are truly humanity
itself, we'll need to make sure
that everything is locked down,
and I'm sure that we're not going to get any
more leaks. So now, this is where OpenAI,
literally yesterday, published "Securing
research infrastructure for advanced AI",
where they outline the architecture that
supports the secure training of frontier
models. Basically, they say: we're
sharing some high-level details on the
security architecture of our research
supercomputers. OpenAI operates some of
the largest AI training
supercomputers, enabling us to deliver
models that are industry-leading in both
capabilities and safety while advancing
the frontiers of AI. They state
that they prioritize security, and
through this post they detail certain ways
they maintain it, including, of
course, protecting the model weights.
They state that protecting the model
weights from exfiltration from the
research environment requires a defense-
in-depth approach that encompasses
multiple layers of security; these
bespoke controls are tailored to
safeguard their research assets against
unauthorized access and theft while
ensuring they remain accessible for
research and development purposes. Now, I
think they did this because OpenAI
doesn't want the government to
come in and say, look, we need to
have people in here to make sure that
you guys know what you're doing. But I do
think that in the future there's going
to be some kind of government
intervention, because OpenAI has literally
been a company so
tumultuous that what has gone on is
shocking: the CEO was fired,
certain researchers left, certain
researchers were fired, some people are
leaving saying that this company is
not good for safety, and some people are
saying AGI is happening, that it's
going to be next year. I mean, for a
company that is literally the most
advanced AI company in the world, there
is so much drama that has gone on that
it doesn't inspire the most trust from
the general public in terms of what
they're going to be doing with regards
to securing the model weights. In
addition there are currently literal
people on Twitter like Jimmy Apple that
know when future releases are coming so
how on Earth is this even a thing?
I think there were even some
tweets about how certain people were
taking pictures of laptops in
cafés just near OpenAI's
research lab, and that's essentially how
they were getting the leaked info. So I'm
guessing that maybe some OpenAI
employees just left their
laptops open, or maybe someone was taking
screenshots of what was going
on on their laptops at cafés just
outside OpenAI headquarters. And it's
basically stuff like this that makes you
think about what's going on here: they
need serious security,
because if they are really on the path
to AGI, that means they're on the path to
superintelligence, which holds huge
implications for the future.
And of course, the last part is
where he talks about superalignment:
reliably controlling
AI systems much smarter than we are is
an unsolved (repeat: unsolved) technical
problem, and while it is a solvable
problem, things could very easily go off
the rails during a rapid intelligence
explosion; managing this will be
extremely tense, and failure could be
catastrophic. He's basically saying: look,
if we make something that is 10 times
smarter than us, think about how much
smarter we are than chimps. We're not
that much smarter in raw cognitive
terms, but being just that little bit
smarter has meant we've been able to
do so much more it shows us that look
you don't need to create something
that's a million times smarter than you
to realize that it could screw you over
and do things that you're not truly
going to understand and of course this
is someone that literally worked on
super alignment at open AI so this isn't
just a random blog post. And here's where
the real problem lies: by the time
the decade is out, we're going to have
billions of vastly superhuman AI
agents running around, and these
superhuman AI agents will be capable of
extremely complex and creative behavior
we will have no hope of following along
with; we'll be like first graders trying to
supervise people with multiple doctorates.
In essence, we're going to face the problem of
handing off trust: how do we trust that
when we tell an AI agent to go and do
something, it's going to do it with our
best interests in mind? This is
essentially the alignment problem we're
not going to have any hope of
understanding what our billion super
intelligences are actually doing even if
they try and explain it to us because
we're not going to have the technical
ability to reliably guarantee even basic
side constraints for these systems and
he's basically stating that, look,
reinforcement learning from human
feedback relies on humans being able to
understand and supervise AI behavior,
which fundamentally won't scale to a
superhuman system: it relies on
us being able to actually understand and
supervise the behavior, which means we need
to actually understand what's going on,
and if we don't understand what's going
on, then we can't reliably supervise
these systems, which means it's not going
to scale to superhuman systems.
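To make concrete why human judgment is the load-bearing piece here, consider the comparison at the heart of RLHF reward modeling. The sketch below uses a standard Bradley-Terry formulation (an illustration of the general technique, not OpenAI's actual code): a human labels which of two model outputs is better, and the reward model is trained so the preferred output gets a higher score. The whole scheme presupposes the human judge can actually evaluate the outputs, which is exactly what breaks down for superhuman systems.

```python
import math

# Minimal sketch of the human-comparison step in RLHF reward modeling
# (standard Bradley-Terry model; an illustration, not OpenAI's code).
# A human picks the better of two outputs; the reward model is fit so
# that P(A preferred over B) = sigmoid(r_A - r_B).

def preference_probability(reward_a: float, reward_b: float) -> float:
    """P(human prefers output A over output B) under Bradley-Terry."""
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))

print(preference_probability(2.0, 0.0))  # clearly better answer -> ~0.88
print(preference_probability(1.0, 1.0))  # judge can't tell them apart -> 0.5
```

When outputs exceed the judge's comprehension, every comparison degenerates toward that 0.5 coin flip, and the training signal carries no information.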
And the craziest thing is that, remember,
last week OpenAI literally disbanded its
superalignment team. Here is a nice
illustration where you can see the
little AI giving us a very basic
piece of code, and of course we can easily
understand that that looks safe but here
we're like wait a minute what is all
this stuff is this safe what's going on
it's like you know it's very hard to
interpret what on Earth is going on in
addition we can see here that some of the behaviors that may emerge are ones we don't want so of course if we think about getting a base model to make money by default it may well learn to lie to commit fraud to deceive to hack to seek power because in the real world people actually use these strategies to make money and of course we can add side constraints such as don't lie and don't break the law but if we can't understand what these systems are doing we won't be able to penalize the bad behavior and if we can't enforce these side constraints it's not clear what's going to happen maybe they'll even learn to behave nicely when humans are looking and then pursue more nefarious strategies when we aren't watching which is a real problem and this is something that actually does occur already one of the main things that I genuinely think about on a day-to-day basis is this right here
okay um it says what's more I expect that within a small number of years these AI systems will be integrated into many critical systems including military systems and failure to do so will mean being outcompeted by adversaries okay this is why it's such a trap which is why we're on this train barreling down this pathway which is super risky think about it like this in the future we're going to have to equip a lot of our technologies with AI systems inside of them because if we don't they're just not going to be as effective and we're going to get dominated by adversaries but of course everyone was stating that before AI got this good we all said we would never connect it to the internet and now it's connected to the internet and people are not batting an eye and the problem is that if we get an alignment failure AI will already be in every single piece of infrastructure so what happens when AI fails and it's in every single piece of technology it's pretty insane and of course failures on a much larger scale could be really really awful
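To make the side-constraints point above concrete, here's a toy sketch in Python (all names and numbers are invented for illustration, not anything from the document): we reward an agent for profit and add a penalty for lying, but the penalty can only be applied when a supervisor actually detects the lie. Once supervision breaks down, the constrained objective starts favoring the bad behavior.

```python
def expected_reward(lie: bool, detect_prob: float) -> float:
    """Expected reward for one episode of a hypothetical money-making agent.

    Honest play earns 1.0. Lying earns 3.0 but incurs a 10.0 penalty
    *only if the supervisor catches it*, which happens with probability
    detect_prob. All values are made up for illustration.
    """
    if not lie:
        return 1.0
    return 3.0 - detect_prob * 10.0

# A supervisor who understands the agent's behavior catches most lies,
# so honesty wins: 3.0 - 0.9 * 10.0 = -6.0 < 1.0
assert expected_reward(lie=True, detect_prob=0.9) < expected_reward(lie=False, detect_prob=0.9)

# A supervisor who can no longer follow what the agent is doing rarely
# catches anything, so the "constrained" objective now prefers lying:
# 3.0 - 0.1 * 10.0 = 2.0 > 1.0
assert expected_reward(lie=True, detect_prob=0.1) > expected_reward(lie=False, detect_prob=0.1)
```

The point of the sketch is that the penalty term does nothing by itself: its effect is gated entirely on the supervisor's detection rate, which is exactly the quantity that collapses once the system is smarter than the humans watching it.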
and here's another graphic which presents a lot of stuff this is where we have AGI you know reinforcement learning with human feedback works the failures are low stakes the architecture and algorithms are ones we understand the backdrop of the world is pretty normal but this is where we get to superintelligence and remember the transition here is only 2 to 3 years maximum once we get to superintelligence the failures are catastrophic the architecture is alien and it's designed by the previous generation of super smart AI not by humans okay and the world is going to be going crazy there's going to be extraordinary pressures to get this right and of course we have no ability to understand whether these systems are even aligned or what they're doing and then we're basically going to be entirely reliant on and trusting of these AI systems so how on Earth are we really even going to get this right and here's
the thing okay no matter what we develop true superintelligence is likely able to get around most any security scheme but measures like these still buy us a lot more margin for error and we're going to need any margin we can get now here's one of
the scariest things that I think about
and this is something that I saw covered in literally only one article and there was one Reddit post about it that I think got removed so I'm not even sure if anyone's even paying attention at this point but um basically if you think about it a dictator who wields the power of superintelligence would command concentrated power unlike anything we've ever seen if you managed to control superintelligence which is of course kind of hard because we won't be able to align it we
could have a situation where there is just complete dictatorship millions of AI controlled robotic law enforcement agents could police their populace mass surveillance would be hypercharged dictator-loyal AIs could individually assess every single citizen for dissent with near perfect lie detection rooting out any disloyalty essentially the robotic military and police force could be wholly controlled by a single political leader and programmed to be perfectly obedient there would be no risk of coups or rebellions and his strategy is going to be perfect because he has superintelligence behind him what does that look like when we have superintelligence controlled by a dictator there's simply no version of that where you escape literally past dictatorships were not permanent okay but superintelligence could eliminate any historical threat to a dictator's rule
and lock in their power and of course if you believe in freedom and democracy this is an issue because someone in power even if they're good could still stay in power and you still need freedom and democracy to be able to choose your leaders this is why the free world must prevail there is so much at stake here and yet hardly anyone is taking this into account so let me know
what you thought about situational awareness I do apologize for making this video so long but even at this length there was still a lot that I looked at that is not going to be covered in this video if you do want to watch the podcast I will leave a link in the description where there is a 4-hour podcast with Leopold Aschenbrenner and of course Dwarkesh Patel in which they have an interview that is remarkably insightful
like it's really really good because
they just talk about a lot of stuff that
you really should know so if there was anything I missed in this video let me know what you guys think because I think this is probably going to be the piece of information that stays with me for the longest time I'll be constantly revisiting this document to see if some of these predictions are coming true and where things are lining up