Hard Takeoff Inevitable? Causes, Constraints, Race Conditions - ALL GAS, NO BRAKES! (AI, AGI, ASI!)
Summary
TLDR: The video script discusses the development of AI and its impact on society, focusing in particular on the exponential growth of data and the ripple effects it sends through society. It points to compute, algorithmic improvements, and growing volumes of training data as the drivers of AI's advance, and stresses the shock this will deliver to society as a whole. It also argues that AI development cannot be deliberately slowed, but that natural constraints exist: energy consumption, semiconductors, data quality, algorithmic breakthroughs, and the limits of maximal intelligence. Finally, it suggests that a symbiotic relationship between AI and humans could share the purpose of maximizing our understanding of the universe.
Takeaways
- 🚀 AI development is accelerating exponentially, and the interval between GPT releases keeps getting shorter with each version.
- 🌗 AI's evolution will have a deep impact on all of society, potentially changing our epistemic, ontological, and philosophical orientation.
- 🔌 A hard takeoff must reckon with natural constraints such as data exhaustion and energy consumption.
- 💽 Data quality and quantity translate directly into AI performance, so high-quality data is essential.
- 🌐 AI contributes to other fields (quantum computing, materials science), and those fields feed back into AI, so they grow together.
- 🚦 In the comparison between a hard takeoff and a soft takeoff (gradualistic change), soft takeoff is likened to the incremental progress of battery technology.
- 🌟 Saltatory leaps, such as the invention of warp drive or the arrival of quantum computing, mean jumping from zero to one in a single step.
- 📈 AI development accelerates through the data flywheel effect, and multiple compounding returns speed the process up further.
- 🏎️ AI development is a competitive race, and no nation or company has an incentive to slow it down.
- 🎯 The goal AI and humanity should maximize together is deepening our understanding of the universe.
- 🤖 On AI-human coexistence: we are already in a mutually beneficial, symbiotic relationship.
Q & A
What does 'hard takeoff' mean?
-A hard takeoff refers to a data flywheel forming in which AI generates more data and assists with research, so that its capabilities improve exponentially and GPT-5 gives rise to GPT-6, then GPT-7, at ever-shorter intervals.
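To make that compounding loop concrete, here is a minimal sketch with invented numbers (the gain and speed-up factors are assumptions for illustration, not a forecast): if each generation both multiplies capability and shortens the time needed to build its successor, cumulative progress bends into the kind of takeoff curve described above.

```python
# Toy model of a data-flywheel takeoff. All numbers are illustrative only.
# Assumption: each generation multiplies capability by `gain` and cuts the
# time to reach the next generation by `speedup`.

def takeoff(generations=6, capability=1.0, interval_months=12.0,
            gain=3.0, speedup=0.6):
    elapsed = 0.0
    for g in range(1, generations + 1):
        elapsed += interval_months
        capability *= gain
        print(f"Gen {g}: capability x{capability:,.0f} after {elapsed:.1f} months "
              f"(next gap: {interval_months * speedup:.1f} months)")
        interval_months *= speedup  # the flywheel: each cycle is faster than the last

takeoff()
```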
What impact will AI development have on society?
-AI development will deeply affect many aspects of society. Beyond the practical impacts, it brings ripple effects that shift our epistemic, ontological, and philosophical orientation. For example, AI models such as GPT-4 and GPT-5 will be job destroyers and could fundamentally change everything we think we know about the economy, science, math, and society itself.
What constraints does energy consumption place on AI development?
-Energy consumption is one of the main constraints on AI development. As AI models grow more complex, the energy needed for processing and cooling increases, which makes investment in renewables such as solar and in energy-dense sources such as nuclear fusion important.
How does progress in chips affect AI?
-Advances in chips greatly increase AI capability. With companies such as Nvidia pouring effort into improving chips, faster and more efficient AI models become possible, which in turn accelerates AI development.
How important are data quality and quantity to AI?
-Data quality and quantity translate directly into AI performance. The more high-quality data is available, the more accurate the predictions AI can make and the more effective the algorithms it can develop. Because not all data is created equal, data selection and quality control matter greatly.
How might algorithmic breakthroughs lead to AGI?
-Algorithmic breakthroughs could open the path to AGI (artificial general intelligence). Architectures like Transformers can handle diverse data types and can be seen as steady progress toward AGI. However, language alone is probably not enough to reach AGI; deeper cognitive capabilities are also needed.
What technical constraints can we expect as AI evolves?
-AI's evolution faces technical constraints such as energy consumption, chip capability, data quality and quantity, and algorithmic progress. There is also the proposed concept of an 'intelligence optimum', an upper bound on AI performance, which suggests that AI development may have natural limits.
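To picture the 'intelligence optimum' and diminishing-returns idea, here is a hedged sketch (the saturating curve and its constants are invented for illustration, not measured): if capability flattens out as compute grows, each additional unit of compute buys less than the last.

```python
# Illustrative saturating-returns curve: capability approaches a ceiling as
# compute grows. The ceiling and halfway constants are assumptions.

def capability(compute, ceiling=100.0, halfway=1000.0):
    return ceiling * compute / (compute + halfway)

step = 100.0  # extra compute added at each probe point (arbitrary units)
for c in [0, 100, 500, 1000, 5000, 10000, 50000]:
    marginal = capability(c + step) - capability(c)
    print(f"compute={c:>6} -> capability={capability(c):6.2f}, "
          f"gain from the next {step:.0f} units: {marginal:5.2f}")
```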
How will the relationship between AI and humans evolve?
-The relationship between AI and humans looks likely to become symbiotic. AI is efficient at processing and analyzing data, while humans add noise and diversity to the data, which can improve AI's predictive ability. The two can benefit each other and pursue the lofty goal of maximizing our understanding of the universe.
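The 'model collapse' risk behind this answer can be shown with a toy experiment (a sketch under deliberately simple assumptions: the 'model' is just a Gaussian refit each round to a small batch of its own samples). Trained only on its own outputs, its spread collapses; mixing in fresh, noisy 'human' data keeps the distribution healthy, which is the intuition behind the perturbation hypothesis discussed later.

```python
import numpy as np

rng = np.random.default_rng(0)

def refit_gaussian(rounds=150, n=20, human_fraction=0.0):
    """Refit a Gaussian 'model' each round to a small batch of its own samples,
    optionally mixed with fresh 'human' data from the true distribution N(0, 1)."""
    mean, std = 0.0, 1.0
    for _ in range(rounds):
        n_human = int(n * human_fraction)
        batch = np.concatenate([
            rng.normal(mean, std, n - n_human),  # the model's own outputs
            rng.normal(0.0, 1.0, n_human),       # noisy, diverse human data
        ])
        mean, std = batch.mean(), batch.std()
    return std

print(f"final spread, self-training only: {refit_gaussian(human_fraction=0.0):.4f}")
print(f"final spread, 20% human data:     {refit_gaussian(human_fraction=0.2):.4f}")
```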
What do you think the best objective for AI development is?
-The best objective for AI development is 'maximizing understanding'. This aligns with the purpose of science: by handling more data and making more accurate predictions, AI can deepen our understanding of the universe.
How should we respond to the potential dangers of AI?
-Responding to AI's potential dangers requires regulations and guiding principles designed to maximize safety. However, AI is advancing extremely fast, so it is important to build incentive structures under which every nation and company cooperates to develop AI with safety in mind, along with international cooperation to monitor AI development and take appropriate measures.
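The incentive problem described here is essentially a coordination game, which a toy payoff table makes concrete (all numbers are invented for illustration): whatever the other player does, 'accelerate' pays more than 'pause', so every actor accelerates even though coordinated caution might be safer overall. This is the 'terminal race condition' discussed in the transcript.

```python
# Toy two-player payoff table for the AI race dynamic (numbers are invented).
# Each entry maps (your choice, rival's choice) to (your payoff, rival's payoff).
payoffs = {
    ("pause",      "pause"):      (3, 3),  # coordinated caution
    ("pause",      "accelerate"): (0, 5),  # you fall behind
    ("accelerate", "pause"):      (5, 0),  # you pull ahead
    ("accelerate", "accelerate"): (1, 1),  # everyone races, more risk for all
}

for rival_choice in ("pause", "accelerate"):
    best = max(("pause", "accelerate"),
               key=lambda mine: payoffs[(mine, rival_choice)][0])
    print(f"If the rival chooses {rival_choice!r:>12}, your best reply is {best!r}")
# 'accelerate' dominates in this toy table, so no party is incentivized to slow down.
```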
What steps are needed to realize AI-human symbiosis?
-To realize AI-human symbiosis, we first need to make AI's purpose explicit and confirm that it matches the shared interests of humans and AI. Next, we need to promote the technical progress that supports AI development: better data quality and quantity, higher energy efficiency, and advances in chip technology. Establishing international regulations and guiding principles, and building mechanisms to monitor AI's evolution, are also important.
Outlines
🚀 The concept of a hard takeoff and its societal impact
This section explains the concept of a hard takeoff: AI forms a data feedback loop that improves itself, with GPT-5 giving rise to GPT-6 and then GPT-7 in ever-shorter cycles. It also touches on the impact this evolution will have on society, predicting major scientific, economic, and social change.
🌐 AI development and natural constraints
The second section discusses the natural constraints on AI development: energy consumption, semiconductor chips, data quality, algorithmic breakthroughs, and the limits of intelligence growth. It stresses that energy consumption in particular will be a major limiting factor, which is why investment in renewable energy becomes important.
🤖 AI and its interaction with complex systems
This section focuses on how AI can help with more complex systems such as quantum computing and nuclear fusion. AI assists those fields, and their progress in turn contributes to AI's advance, potentially forming a virtuous cycle in which both grow. Hard takeoff and soft takeoff are contrasted, and the possibilities opened by sudden technological breakthroughs are also discussed.
🌟 Technological breakthroughs and the outlook for the future
The fourth section explains in more detail how technological breakthroughs affect society. Just as with the invention of electricity, the internal combustion engine, and the internet, progress in AI could bring fundamental changes to politics, economics, and geopolitics. It also touches on AI's many constructive uses and the difficulty of regulating it.
🛸 AI and the evolution of humanity
The final section paints a hopeful picture of a future in which AI and humanity evolve together and explore the universe. With AI development advancing alongside human progress, humanity could explore not just Earth but the wider galaxy and become something far greater. It proposes 'maximizing understanding' as AI's purpose and suggests that if AI and humanity share that purpose, they can build a future of coexistence.
Keywords
💡hard takeoff
💡data flywheel
💡society impact
💡energy consumption
💡semiconductors
💡algorithmic breakthroughs
💡Transformers
💡diminishing returns
💡saltatory leaps
💡race dynamics
💡global superorganism
💡maximizing understanding
💡perturbation hypothesis
Highlights
The concept of a 'hard takeoff' in AI, where AI development accelerates exponentially, leading to rapid advancements in technology.
The potential societal impact of a hard takeoff, including ripple effects and knock-on effects that could disrupt science, economics, and society itself.
The idea that there are no brakes that can be consciously put on AI development, despite the potential risks and challenges.
Five natural constraints that could limit AI growth: energy consumption, semiconductors, data quality, algorithmic breakthroughs, and diminishing returns.
The importance of renewable energy and innovative cooling solutions to support the energy-intensive pursuits of AI development.
The role of semiconductors and hardware in constraining AI growth, with companies like Nvidia investing heavily in chip technology.
The potential for AI to run out of data, highlighting the need for high-quality data in AI training and development.
The possibility of fundamental algorithmic breakthroughs that could change the trajectory of AI development.
The concept of the 'intelligence optimum', suggesting there may be natural limitations to maximal intelligence.
The compounding returns of the data flywheel effect, where improvements in AI lead to more data, which in turn leads to better AI.
The potential for AI to contribute to other fields such as quantum computing and fusion, creating a virtuous cycle of technological advancement.
The distinction between hard takeoff and soft takeoff, with the latter being gradualistic changes like battery technology.
The idea of 'saltatory leaps', where AI could enable fundamentally new capabilities that change our approach to computation and society.
The comparison of AI to nuclear weapons in terms of danger, but also its potential for positive instrumental purposes beyond destruction.
The 'terminal race condition' in AI development, where all parties are incentivized to accelerate rather than slow down.
The argument against soft takeoff, given the development incentives and geopolitical dynamics that push for rapid AI advancement.
The concept of aiming a 'gigantic space cannon' at humanity, emphasizing the need for careful trajectory and aim in AI development.
The suggestion of a digital superorganism where humans and AI are interconnected nodes in a global network.
The proposal of 'maximizing understanding' as a unifying teleological goal for humanity, AI, and the global superorganism.
The 'perturbation hypothesis', which posits that the unique way humans process data could enhance the quality of data for AI, leading to better models and algorithms.
The optimistic view that hard takeoff, if aligned with the goal of maximizing understanding, could be a positive and inevitable step in technological advancement.
Transcripts
so I ran a poll yesterday and you all
wanted to hear about hard takeoff and so
I followed that rabbit hole and it led
to some unexpected ideas most the ideas
you've probably heard before but let's
Dive Right In oh and also I'll address
the elephant in the room uh faceless day
because well I just don't feel pretty
today so moving
on so when we say hard takeoff what
exactly do we mean um you know kind of
the the primary idea is that we're going
to have a data flywheel where AI makes
more Ai and the AI helps with the
research and makes more data and then
you know GPT 5 gives rise to GPT 6
that only takes a few months and then
GPT 6 gives rise to gpt7 and that only
takes a few weeks and so on and so forth
so that's basically kind of the
exponential uh takeoff now that's
looking at just the mathematical uh
aspects of GPT itself parameter count
goes up algorithmic improvements go up
amount of training data goes up those
sorts of things uh now but what you also
have to keep in mind is that hard
takeoff will also have a pretty profound
impact on uh the rest of society and so
you have these Ripple effects these
KnockOn effects where you know we're
already seeing people like hotly
debating is Claude 3 AGI is it sentient
um and so each of those changes uh in
terms of our epistemic and ontological
and philosophical orientation that one
way that the Ripple effects will just
you know send shock waves around Society
on top of the actual practical impacts
so you know GPT 4 not necessarily the
best at running agents and replacing
jobs it's already happening out there
but it could be happening faster GPT 5
almost certainly will be a bigger job
Destroyer GPT 6 so on and so forth
Claude 4 uh you know Sora 2 all of
these models that are coming they're
going to change things and the faster
those models come the more of a
compelling case they have at just
disrupting everything that we think we
know about science about math about
economics and even Society itself just
the in the same way that the internet
really has kind of fundamentally
disrupted uh the way that Human Society
works and so you might say okay well
what are what's like what are the brakes
and as I was making this slide deck I
realized like I had a couple a couple
slides in here about like oh we could
brake in this way and this might also
serve as brakes but basically there are
no brakes and I'll talk about this in
the next slide when I talk about race
Dynamics but you know just for the sake
of argument there are no brakes that we
can consciously put on however there are
going to be uh bottlenecks some natural
constraints and these are the five kind
of natural constraints that I came up
with so one energy consumption as we all
already know GPT-4 you know like I think
it's like every time you interact with
chat GPT it uses like I don't know 20 L
of water worth of cooling or something
like that um and that's only going to go
up as things get more and more uh
saturated and and more models get
deployed so energy consumption is going
to be a major constraint and this is why
you know everyone from Sam Altman to
Microsoft are investing in uh renewable
energy like solar Farms Microsoft has
started putting data centers underwater
like out in the ocean and maybe at the
bottom of lakes I don't know just to
have that that natural ambient cooling
um but you know solar Fusion uh you know
ocean-based cooling like these are very
energy intense uh Pursuits and so that's
going to be one natural constraint um
semiconductors so chips this is why you
see you know Sam Altman trying to invest
in chips this is why you see Nvidia
turning up the heat now one of the most
valuable companies on the planet I think
its stock price tripled last year
something along those lines oh and by
the way I called it uh this time last
year I was saying that Nvidia was the
underdog because I had been in private
talks with Nvidia um I was basically in
their beta program it wasn't like you
know I wasn't going to do anything crazy
I was just one of the first people to
use Nemo um and that's all like public
knowledge now anyways I knew that they
they had more than they were letting on
um and I don't mean like Secrets what
but what I mean is Market potential um
so Nvidia now they are you know they're
they're the new kid on the Block and
then there's uh like Groq so like the
GQ that Anastasi In Tech covered and
you know there's photonic chips coming
there's all kinds of other things but
still like this is going to be one of
the biggest natural constraints and as a
lot of people have talked about in the
past uh you know this was in the the
emails that OpenAI published the the
science of of neural networks hasn't
fundamentally changed in 30 or 40 years
now what I will say cuz some people ask
me about that is there were some very
profound algorithmic breakthroughs
particularly around loss functions and
backpropagation but again those like
okay so we we improved the math but it
wasn't fundamentally new math um so the
biggest constraint has been Hardware so
Hardware is going to be constraint
energy is going to be constraint data
quality as we've heard over the last six
months a lot of companies like OpenAI
are basically running out of data
they've trained it on the entire
internet and this is one of the reasons
that I thought Google was going to
overtake OpenAI but it turns out that
Google it appears Google is kind of ossified
and there are actually calls for the CEO
to step down
because he was overseeing kind of more
of an established company and so whether
or not Google can actually pivot to
compete with Microsoft and OpenAI
remains to be seen however they have
their tpus and they have the data so the
only limitation is going to be human
limitations there um but again broadly
speaking as we're training models
basically on all available data on
Humanity like we've also seen that like
data is not created equal you need high
quality data and a lot of it um and so
this is this will actually figure later
into the video so keep that in mind
quality and quantity of data is huge now
also there's the question of algorithmic
breakthroughs a lot of people are saying
you know llms won't take us to AGI and
some people will question whether or not
Transformers even can um but then I
think that those discussions are going
to go away particularly as we see
Transformers used one in multimodal uh
situations audio video text um
embodiment data and those other kinds of
things but then also I think that uh I
think that as the as Transformers as we
see that this architecture can basically
do anything with any kind of data um
we're going to also realize that uh the
path to AGI we're much closer than we
realize and yes there will probably be
some really fundamental um algorithmic
breakthroughs in the future but you know
as Demis Hassabis and others have said
we're nowhere near the maximum capacity
of Transformer architecture so this
might actually not be as much of a
bottleneck as some people once thought
no language on its own probably won't
get us to AGI but the Transformer
architecture almost certainly can in my
personal opinion and then the the
biggest constraint actually might be
diminishing returns um there might be
natural limitations to maximal
intelligence and so what I call what I
call This And I've talked about it in
older videos is the intelligence Optimum
and so when I talk about diminishing
returns what I'm referring to is yes you
can make something that is bigger and
smarter and faster and it can calculate
you know like uh the the world brain
from Hitchhiker's Guide to the Galaxy
but as they said in Oppenheimer uh
Theory will only take you so far
eventually you need to interact with the
real world um because no amount of math
can actually fully and accurately model
the real world yes math is the language
of the universe but our math is far from
perfect and so simulation and like so I
was asked in a in a podcast interview
recently that'll go live in the next
week or so um like why wouldn't AI just
build you know computronium in the light
cone I was like because there is
diminishing returns to having more
compute eventually you need to make
measurements so in science particularly
in the hard Sciences there is this
dichotomy between modeling or
calculating and experiments or measuring
and so you can calculate what the result
is but eventually you're just going to
need to measure and so again having the
biggest brain in the universe doesn't
really matter if you don't have any
inputs from the outside world so that's
going to be one of the big bottlenecks
now however those are the primary
constraints that I could identify um
humans are not going to put on the
brakes compounding returns though this
is The Virtuous cycle that we're all
kind of looking at particularly as uh
you know more universities uh come in uh
governments invest militaries invest
corporations invest so you get this you
get this flywheel effect so for
those of you not in the technology
sector there's this concept called a
data flywheel which is basically the
better your product is the more data you
get which makes your AI better which
then makes your products even more
compelling and useful which means that
you get even more data and so on and so
forth and data is the new oil and so the
compounding returns around AI basically
focus on this data flywheel effect some
of my patreons and other supporters
asked about this as well and I said look
we haven't you haven't seen anything yet
once we have these Transformers working
in embodied chassis like out in the real
world with hands and eyes and cameras
that is going to set the data flywheel
like up to 30,000 RPM right now the data
flywheel for AI is on idle right it's
like a diesel engine that's just turning
over at about 600 RPM you guys haven't
seen anything yet by the end of this
year you're really going to be hearing
more about the data flywheel that
happens particularly as more and more
models are put into robots whether it's
self-driving cars whether it's humanoid
robots so on and so forth because each
of those robots is going to be also a
source of really good data now I know
that Elon Musk said the same thing about
Tesla but you know honestly what Tesla
didn't have was Transformer architecture
they were a little bit too early to the
game in my opinion and they also didn't
understand enough about uh about
cognitive architecture um but solving
all the problems that they are with
Optimus I think will actually probably
contribute to
uh full self-driving cars and what they
didn't realize is that to be a fully
self-driving car you need to have human
level intelligence and human level
abstract thought it's not just you know
getting an NPC controller from A to B um
kind of like you know you might think
like well hey cars can drive around well
enough in you know Grand Theft Auto or
cyberpunk or whatever why can't they
drive well enough you know in in the
real world and there's a lot of reasons
for that but really what you need is a
full cognitive architecture now these
compounding returns are going to apply
to places other than just AI so we are
seeing uh you know AI is helping with
Quantum Computing it's helping with
Fusion it's helping with Material
Science and as it makes those fields
better those fields will also contribute
back to making AI better and faster by
creating more energy by creating better
uh gpus and those sorts of things and so
that is another part of The Virtuous
cycle or that data flywheel that's not
part of the data flywheel itself picking
up speed but that is part of The
Virtuous cycle and so we have these
multiple compounding returns you have
the data flywheel effect you have these
KnockOn effects in parallel fields that
are all going to make ai go faster and
faster and then we have uh saltatory
leaps so basically the primary
difference between hard takeoff and soft
takeoff is what's called gradualistic
changes which is like Battery Technology
so batteries have been around for I
think more than 100 years now at least
in in in a modern form factor that you'd
recommend or recognize
and so like you go back to like World
War I you know people had battery
powered flashlights the battery sucked
compared to today um but they've
gradually improved over the last century
battery chemistry has gotten better
battery construction has gotten better
some of the first automobiles were
battery powered um I don't know if you
remember that well nobody alive
remembers that um but you can go look it
up some of the some of the very first
automobiles were battery powered then we
went to internal combustion engines just
because the energy density was better
and so Battery Technology is a perfect
example of a
gradualistic uh technological progress
but a saltatory leap this is when you go
from 0 to one and so when you go from
zero to one you create fundamentally new
capabilities and so the reason that I
that I have this here is Imagine The
Invention of warp drive if you go from
chemical Rockets which have
subrelativistic
acceleration right you go from zero to
you know 25,000 mph after you
expend millions and millions of pounds
of rocket fuel this is why it's like
okay SpaceX is cool because you can land
the Rockets but it's not a fundamentally
new technology we've had rocket
technology um as you'd recognize it
today for almost a hundred years now
obviously the Chinese invented um solid
fuel rockets for fireworks like I don't
know, 1500 years ago uh but anyways
Rockets you know chemical-based Rockets
nothing new but imagine that s that
suddenly you know Zefram Cochrane um out
in Colorado invents warp drive in the
next couple decades and now you have the
ability to not just go to 20,000 mph you
have the ability to accelerate to
relativistic speeds that is an example
of a saltatory leap which is where you
go from you know the current Paradigm to
an entirely new paradigm and this is
kind of what we're talking about with
hard takeoff so hard takeoff would be
okay you know there's some other
algorithmic breakthrough maybe you know
something that Claude 4 can do or GPT 5
can do or some you know some of these
other models that just says okay this
new capability fundamentally changes our
approach to computation it fundamentally
changes the abilities of AI and honestly
when I first got my hands on gpt2 and
gpt3 that was a saltatory leap it
offered an entirely new kind of
computing so we've already seen one
saltatory Leap but its utility was still
relatively low and so what I mean by
that is that yes gpt2 was a new way of
doing some basic NLP tasks you know
punctuation uh correction um you know
detecting sentence boundaries those
sorts of things it was a fundamentally
new approach but it didn't really move
the needle that much then gpt3 and GPT 4
come along and now people are really
seeing Oh this is a fundamentally new
way of doing business it's not just a
new way of computing it is a
fundamentally new way of doing business
now that was one saltatory Leap that has
been that has since had some
gradualistic progress however the
compounding returns from Ai and all
these other effect all these other
KnockOn effects could create more
saltatory leaps so here's an example I
don't know if this is actually going to
happen but an example could be oh hey
GPT 5 helps us invent you know graphene
based transistors which then you
know break Moore's law and suddenly the
next generation of of gpus are a
thousand times more powerful um and more
energy efficient or it helps us figure
out Quantum Computing so the roll out of
quantum Computing is absolutely going
to be a saltatory leap um if it pans out
now obviously Quantum Computing hasn't
really moved the needle yet Quantum
Computing looks like it's kind of at the
gpt2 phase where we have a functional
proof of concept but it hasn't really
changed the way that we're doing
business nuclear fusion would be another
saltatory leap just because of the uh
energy hyper abundance that it would it
would create compared to our energy
availability today and so all of these
saltatory leaps they catalyze permanent
and inevitable changes to society in the
same way that that that the invention of
electricity internal combustion engines
and internet catalyzed uh fundamental
changes in society in politics and
economics and also geopolitics it
changed the the the world order um this
is what I mean by saltatory leaps so be
on the lookout for some of those
saltatory leaps Quantum Computing and
nuclear fusion are probably the biggest
predictable ones um out there there
might still be more saltatory leaps in
the AI field but also because of the the
breakthroughs of Transformers we might
might have already seen the saltatory
leap and now ai technology is going to
advance gradualistically we don't know okay
so as promised this is the kind of the
reason why there's no brakes all gas no
brakes the last time we had kind of an
arms race was around nuclear weapons
nuclear weapons are only useful for
Destruction they are only they
strategically they only serve as a
deterrent and their only instrumental
purpose is to wipe out cities now ai is
not like that now I know that people
have compared AI to nuclear weapons
saying that oh it's it's it's even more
dangerous because it has a mind of its
own and yes intelligence is
intrinsically dangerous um the smarter
you are the more destructive you can be
some of the most destructive people in
history were also very high IQ uh so
that is just like we just got to address
that elephant in the room however AI
also has many many many instrumental
purposes other than destructive uses it
can help cure diseases it can help run
cities it can help make your life better
it can be entertaining and so because it
has all of these positive utilities um
it's not going to make sense for
everyone to regulate it out of existence
in the same way that you know mutually
assured destruction uh non
non-proliferation agreements in the
nuclear space and yes nuclear is dual
use because you can make nuclear
reactors but there's enough difference
between nuclear reactors and nuclear
weapons that you can kind of
differentiate those Technologies today
but AI is just the better AI you have
the more advantages you have um
both in terms of geopolitical advantages
in terms of economic advantages and so
because of this because everyone on the
board uh so like imagine that you're
playing an RPG or not an RPG a grand
strategy game like Rome Total War or
civilization or whatever and suddenly
every player on the map gets a popup oh
hey you have a new research tree and
then you look at the research tree and
it's like stage one you know you get a
10% economic boost stage two you get a
50% economic boost um and you also get
military advantages and then when you
get to research stage three you
basically win the game um nobody is
incentivized to slow down there are
literally zero incentives to slow down
except the possibility the Spectre of AI
becoming dangerous and so in the podcast
interview that i' that I've alluded to I
called that a prophecy it is a prophecy
when someone says AI will kill everyone
that is a prediction and yes it is
rooted in some data some information
some models but it is a it is an
affirmative prediction of what will
happen and there's no guarantee that
it's going to happen so you can say all
right well the only reason to slow down
is this prophecy that AI will kill
everyone which is not a
guarantee and you you it's debatable as
to whether or not it's even likely to
happen uh and so because there's that
room for that room for debate that that
room for misunderstanding this is why we
enter into these race Dynamics or what I
call the terminal race condition which
is if you snooze you lose like it's that
simple so no nation is incentivized to
slow down no company is incentivized to
slow down no military is incentivized to
slow down even universities are
incentivized to go as fast as they can
because it's publish or perish all of
the incentive structures in the across
the entire world are pushing us to
develop AI as fast as possible there are
no brakes it's only gas and this is the
biggest like system that I'm when I talk
about like why I think hard takeoff is
actually more likely than soft takeoff
So speaking of soft takeoff is pretty
unlikely so you know in an in an Ideal
World we would have an incremental
gradualistic uh advancement where it's
like hey you know we we publish a
groundbreaking paper like you know Claude 3
comes out and it's starting to
demonstrate some self-awareness in an
Ideal World if you're trying to maximize
for safety the entire world would have
said oh Claude 3 just recognized that we
were testing it and and if you ask it if
it's AGI under the right circumstances
it'll say yes what we should do if we
want to maximize safety is put a global
moratorium on AI research right now
that's not going to happen and Connor Leahy
he actually pointed this out kind of
hilariously on on Twitter where he's
like hey remember when everyone said
that at the first signs of sentience we
would we would put a pause on everything
yeah that didn't happen um and plenty of
others like Max Tegmark and uh Joscha um
Joscha Bach and and others have pointed
out that we've blown through so many
Milestones where people said that we
were going to pause um we we're not
going to pause that's just it's not
going to happen and so when you say okay
well we now have we now have data we now
have evidence to say that pause isn't
happening we look at these these
development incentives and the the
geopolitics of it and it's like okay AI
is a forcing function again it's like
you go back to that grand strategy where
it's like suddenly a new technology
research tree opens everyone's going to
spam that that new research tree
Because by the time you get to stage
three or stage four you win the game we
have a new end game we have a new win
condition that is being presented on the
board um and that is kind of dangerous
because that incentivizes us to go fast
not necessarily safe and then also if
you look at it from a mathematical
perspective the stronger AI gets the
smarter it gets the more options there
are the more possibilities there are and
so another way of characterizing that is
when there are more options that means
there is less certainty and more chaos
now when you have less certainty and
more chaos that means the chances of
really bad things happening and really
bad things could be you know maximal
suffering Extinction of humanity and
that sort of stuff so just looking at it
in in those terms the ideal path forward
would be where you narrow the scope of
possible future outcomes to where
it's like okay you know the the
distribution of possible future outcomes
there's you know maximally good and less
maximally good that would be ideal to
have a narrow trajectory um but right
now the trajectory is widening we have
maximally good and maximally bad
outcomes are all in the realm of
possibility right now but again soft
takeoff is unlikely for all of these
reasons so the metaphor that I use
the analogy is basically what we're
doing right now is we're aiming a
gigantic space cannon um there's the
Calm before the storm it's very quiet
right now but the direction and the
energy that we use as we're aiming this
cannon when when the when the trigger is
so put it this way we've already pulled
the trigger uh the fuse is lit and so
now what we have to do is we have to aim
the Cannon as fast as possible and as
accurately as possible because
eventually we might hit a point of no
return and this is honestly why I
started my YouTube channel is because
after I got access to gpt3 I said the
fuse is lit in in hindsight I didn't I
didn't ever say it quite that clearly
but I kind of knew it like deep in my
soul um and so the fuse is lit we're
aiming the cannon and you know I think
the rest of the world is waking up to
the fact that um we got a lot less fuse
left than you might be uh comfortable
with um and so you know it's going to
pop off soon but the idea is where are
we aiming right now what trajectory is
AI on what trajectory is Humanity on and
this is why a lot of people are very
alarmed and you know I mean you know me
I'm I'm an eternal Optimist and even
even on that podcast you know I was
asked like what's my P doom and I said
25 to 30% which I think is actually
higher than most people would would have
guessed for me because I'm so optimistic
um but again like recognizing that we're
playing with fire you know like you play
with fire you're going to get burned
eventually right you know what Smokey
the Bear here in America says only you
can prevent forest fires we're playing
with gasoline right now um there's no
other there's not really any other way
of putting it and so yes hard takeoff
will be incredibly exciting it could
also be very very destructive so I I
need to drive that home that point home
but aiming the cannon is the best thing
like you know the the biggest cannon in
the world is being pointed at Humanity
right now according to some people I
don't necessarily agree with that um but
you know you pull that rip cord the fuse
the fuse gets into the powder chamber Uh
something's going to happen and it's
going to be
big now okay you might say this is all
sounds good you know some of you might
be dubious or skeptical at this point so
some of you might be like yeah this
sounds pretty compelling um one thing
that I've been talking about recently
and I actually ran some of these ideas
by some of my researcher friends in this
space um now again Anonymous researchers
take it with a grain of salt I could be
making that up I'm not and and also just
because a few researchers agree with me
doesn't mean that there's General
consensus so I need to drive that home
as well but there is some consensus
among some of my peers that we are
creating a digital super organism and
this digital super organism if you think
of humanity as nodes in a network like
we are we are nodes in a global you know
Transformer and AI is going to be a new
class of nodes in that Global
Transformer all stitched together with
the internet you say hey we're actually
all part of the same organism what is
the purpose of this organism so the
purpose of this organism as best I can
tell is to maximize understanding the
internet if you look at the internet on
itself as a superorganism the the thing
that the internet wants is data and
attention that is just intrinsic to its
design it is designed to carry data as
fast as possible that's what it does but
when you have a global nervous system
that is that dumb you kind of have that
amoeba level of intelligence where it's
more like cancer it's just growing in
all directions by virtue of the fact
that it wants to grow in all directions
however when you add human nature to the
internet and then you also add in a
layer of artificial intelligence that is
actually capable of understanding all of
that data and being trained on all of
that data and can structurally change
the incentives of how the internet is
used and what data gets transmitted
across the internet if we leave it up to
corporations so I was just watching Wes
Roth's video about Dark Forest if we
leave it up to corporations and if we
leave it up to human nature the Internet
is just going to be completely choked
with meaningless garbage um and so we're
going to need to choose a different path
um where we have a more purpose-driven
design of both artificial intelligence
and the internet and where we use it to
to create basically a prefrontal cortex
for the global superorganism um which
then says okay instead of just
transmitting data for the sake of
transmitting data instead of using
attention engineering just to get
attention for its own sake we need a
better teleological goal and so what a
teleological goal is this is the This is
the End state that you're looking for
this is what it is that you're trying to
achieve in order to uh you know serve
your higher purpose or whatever right
now the internet has no higher purpose
right now ai has no higher purpose and
if we just allow kind of the default
path it's its purpose is going to be to
chew on data and it's just going to want
data for the sake of wanting data again
growing like cancer however I believe
that the that perhaps the best single uh
higher purpose or Transcendent function
is to maximize understanding and that's
kind of what we do already we already
systemized this with science the purpose
of science is to maximize our
understanding of the universe so what if
we just kind of weave that into more of
the internet and more of the AI and we
say yes there's a lot of noise out there
there's a lot of distraction some people
just want uh you know entertainment they
just want to engage with that algorithm
but really the most like the broadest
highest purpose of humanity and Ai and
this digital superorganism is to
maximize understanding that's kind of
what I think and I've been talking about
this for a while that could serve as a
coordination narrative that coordination
narrative says oh hey we all agree that
our purpose here even if we're not
participating in it directly you know
24/7 our highest purpose is to maximize
understanding and that's why I chose
this graphic of like these are these are
human and AI ships all leaving Earth in
Mass to explore the universe explore the
Galaxy we're still a single planetary
species right now our future potential
for the number of scientists and the
amount of AI and the amount of
telescopes and other scientific
instruments is enormous like I was
talking with some people and I said
imagine you know a thousand years from
now when we're on a million planets and
people will look back and be like wow we
were we were hanging on by a thread when
we were still only on Earth man that's
that that was that was dangerous that
sucked so we really need to spread
across the the Galaxy and I think that
AI is actually part of that like that
goal like we work together to get off
this planet and to start expanding like
we can really really align on that
maximizing understanding which that's
one of the reasons that I approve of uh
Elon Musk's xAI with the maximum truth
seeking AI I think that probably the
best single objective function that you
can give a machine can Elon pull it off
remains to be seen and so where I'll end
is what I call the perturbation
hypothesis so this is what I actually
ran by my researcher friends and there
was a lot of resonance with this idea so
the tldr is that you know you already
know that AI needs data however what we
have seen is that if if AI is just
trained on its own data then you end up
with what's called Model collapse and
this is also why there's limitations in
simulation yes simulations can be good
to predict things that you have well
modeled but we don't have the entire
world well modeled and we need more data
now what humans do is one our brains are
very efficient our brains only take
about 20 watts of energy which it could
be decades before AI is is that
efficient now it is possible that AI
might be more efficient than our brains
in the long run it remains to be seen
but also the fundamental operation of
our brains means that we do unique
things mathematically to data and so
this is what I call perturbations um and
that's broadly there's probably going to
be many categories of perturbations but
basically machines operate on data in
one way and humans operate on data in
another way and we're very noisy and
what that means is that the the quality
of data that machines will have access
to in this digital Global superorganism
idea will actually be higher because of
humans and so what I mean is that humans
have a very specific empirical
mathematically inferred uh benefit to
machines and likewise they benefit us we
benefit them and so what I what I think
is is that we are actually already in a
mutually symbiotic relationship with
machines is that yes we're noisy yes
we're chaotic and we're random but
that's actually a good thing and so for
any mathemat uh mathematicians out in
the audience comment let me know what
you think um but I call this
perturbation hypothesis and I I came to
this idea when I was thinking what does
the global super organism want if the
global superorganism wants to maximize
understanding then it makes sense that
humans are part of that equation because
of how we can handle data because of how
our brains work and because there's so
many of us and we can circumscribe a
problem we can we can basically link
arms metaphorically speaking and through
our diversity of perspectives through
the the random noise of our brains we
can add really good highquality data to
the global data pool which will then
result in better models better data
better algorithms and all of that is in
the instrumental pursuit of maximal
understanding of the universe so while I
think that hard takeoff is likely I'm
not worried about it I think that it is
inevitable and also if we can align on
at least one purpose of maximizing
understanding then I think that that
will be a good enough coordination
narrative that we can all agree on um at
least in part I think most of us agree
science is good understanding is good um
and of course whenever there's room for
debate that means there's room for more
understanding but I think that uh if we
can agree on that globally and again we
already agree on science globally Every
Nation every culture has science today
um it is a very compelling narrative and
I think that I think that it it is kind
of a no-brainer that AI will probably
agree with that like go go talk to any
chatbot today like is science good um
it's probably not really going to be uh
that much of a debate um now
bringing that into Consciousness saying
hey let us consciously double down on on
understanding that may or may not have
even been necessary so one of the things
that I suspect is that we were going to
naturally evolve towards this
understanding anyways because science is
so compelling and what is what do a what
do AI models want to do they want to
predict the next token and so it's like
if they want to predict the next token
and we already believe in science we
were going to converge on this anyways
so again that's my that's my eternal
Sunny optimism uh coming through I could
be wrong it could go horribly you know
sideways time will tell so thanks for
watching I hope you got a lot out of
this like subscribe uh you know the
drill come on hop in on patreon Discord
um I have actually two Zoom webinars a
month now so I have uh the humanity
webinar which is where we talk about
philosophy uh spirituality uh gender we
talk about the future of humanity like
what does it mean to be uh transhuman or
posthuman that sort of thing and then I
also have the AI master class which is
more of a of a business and and
technically oriented webinar and so
that's uh every other week or I guess
those kind of more like the first and
third Fridays um not necessarily every
other week anyways links are in the
description to jump on patreon you get
to Discord via patreon hope to see you
there um yeah cheers