Hard Takeoff Inevitable? Causes, Constraints, Race Conditions - ALL GAS, NO BRAKES! (AI, AGI, ASI!)

David Shapiro
8 Mar 2024 · 34:17

Summary

TLDR: The video script discusses the development of AI and its societal impact, focusing on the exponential growth of data and the ripple effects it sends through society. It points to compute, algorithmic improvements, and growing training data as the drivers of AI's advance, and stresses the shock this will deliver to society as a whole. It argues that AI development cannot be deliberately slowed, but that natural constraints exist: energy consumption, semiconductors, data quality, algorithmic breakthroughs, and diminishing returns on intelligence. Finally, it proposes that a symbiotic relationship between AI and humanity could be organized around a shared purpose: maximizing our understanding of the universe.

Takeaways

  • 🚀 AI development is accelerating exponentially, with the interval between successive GPT releases growing ever shorter.
  • 🌗 AI's evolution will have a profound impact on all of society, potentially shifting our epistemic, ontological, and philosophical orientation.
  • 🔌 A hard takeoff must be weighed against natural constraints such as data supply and energy consumption.
  • 💽 Data quality and quantity directly determine AI performance; high-quality data is essential.
  • 🌐 AI contributes to other fields (quantum computing, materials science), and those fields feed progress back into AI.
  • 🚦 In contrast to a hard takeoff, a soft takeoff consists of gradualistic changes, exemplified by the slow improvement of battery technology.
  • 🌟 Saltatory leaps are sudden zero-to-one advances, such as the invention of warp drive or practical quantum computing.
  • 📈 AI development accelerates through the data flywheel effect, with multiple compounding returns feeding the process.
  • 🏎️ AI development is a competitive race; no country or company has an incentive to slow down.
  • 🎯 The shared objective AI and humanity should maximize is our understanding of the universe.
  • 🤖 Humans and AI are already in a mutually beneficial, symbiotic relationship.

Q & A

  • What does "hard takeoff" mean?

    - A hard takeoff refers to a data flywheel in which AI generates more data and assists with research, so that its capabilities improve exponentially: GPT-5 gives rise to GPT-6, GPT-6 to GPT-7, and so on, with ever shorter intervals between generations.

  • What impact will AI development have on society?

    - AI development will deeply affect many aspects of society. Beyond the practical impacts, it produces ripple effects that shift our epistemic, ontological, and philosophical orientation. Models such as GPT-4 and GPT-5, for example, could become job destroyers and fundamentally change what we think we know about economics, science, mathematics, and society itself.

  • How does energy consumption constrain AI development?

    - Energy consumption is one of the main constraints on AI development. As models grow more complex, the energy needed for processing and cooling rises, which makes investment in renewables, solar, and energy-dense sources such as nuclear fusion increasingly important.

  • How does chip progress affect AI?

    - Advances in chips greatly expand AI's capabilities. With companies like Nvidia, and figures like Sam Altman, pouring resources into chip improvements, faster and more efficient AI models become possible, which in turn accelerates AI development.

  • Why are data quality and quantity important for AI?

    - Data quality and quantity directly determine AI performance. The more high-quality data is available, the more accurate the predictions and the more effective the resulting models. Because not all data is created equal, data selection and quality control matter greatly.

  • Could algorithmic breakthroughs lead to AGI?

    - Algorithmic breakthroughs could open the path to AGI (artificial general intelligence). Architectures like Transformers can handle diverse data types and can be seen as steady progress toward AGI. Language alone, however, probably won't get us there; deeper cognitive capabilities are needed.

  • What technical constraints might AI face as it evolves?

    - AI's evolution faces technical constraints including energy consumption, chip capability, data quality and quantity, and algorithmic progress. There is also a proposed ceiling on AI performance, the "intelligence optimum," which suggests AI development may have natural limits.

  • How will the relationship between AI and humans evolve?

    - The relationship is likely to become symbiotic. AI is efficient at processing and analyzing data, while humans inject noise and diversity into that data, which can improve AI's predictive power. The two can benefit each other and pursue the lofty shared goal of maximizing our understanding of the universe.

  • What is the best objective for AI development?

    - The best objective for AI development is "maximizing understanding." This aligns with the purpose of science: as AI handles more data and makes more accurate predictions, it deepens our understanding of the universe.

  • How should we respond to AI's potential dangers?

    - Addressing AI's potential dangers requires regulation and guiding principles that maximize safety. But because AI is advancing so quickly, it is essential to build incentive structures that encourage every country and company to develop AI with safety in mind, along with international cooperation to monitor AI's progress and respond appropriately.

  • What steps are needed to realize human-AI symbiosis?

    - First, AI's objective must be made explicit and shown to serve the shared interests of humans and AI. Next, the technical foundations of AI progress must be advanced: better data quality and quantity, greater energy efficiency, and improved chip technology. Finally, international regulation, guiding principles, and mechanisms for monitoring AI's evolution are also important.

Outlines

00:00

🚀 The concept of hard takeoff and its societal impact

This section explains the concept of a hard takeoff: AI forms a data feedback loop that improves itself, so GPT-5 gives rise to GPT-6, GPT-6 to GPT-7, and so on. It also touches on the societal consequences of this acceleration, predicting major scientific, economic, and social change.

05:02

🌐 AI development and its natural constraints

The second section discusses the natural constraints on AI development: energy consumption, semiconductor chips, data quality, algorithmic breakthroughs, and the limits of intelligence growth. It emphasizes that energy consumption in particular will be a major limiting factor, which is why investment in renewable energy matters.

10:03

🤖 AI's interaction with complex systems

This section focuses on how AI can help advance complex systems such as quantum computing and nuclear fusion. As AI boosts those fields, they contribute back to AI's progress, potentially forming a virtuous cycle of mutual growth. It also contrasts hard takeoff with soft takeoff and discusses the possibility of sudden technological leaps.

15:05

🌟 Technological breakthroughs and the outlook ahead

The fourth section examines how technological breakthroughs reshape society. Like the invention of electricity, the internal combustion engine, and the internet, AI's progress could fundamentally change politics, economics, and geopolitics. It also touches on AI's many constructive uses and the difficulty of regulating it.

20:05

🛸 AI and the evolution of humanity

The final section paints a hopeful picture of AI and humanity evolving together and exploring the universe: AI's progress, advancing alongside human evolution, could let humanity explore not just Earth but the wider galaxy and become something greater. It proposes "maximizing understanding" as AI's purpose and suggests that sharing this goal could be the basis for a coexistent future.


Keywords

💡hard takeoff

A "hard takeoff" is the idea that artificial intelligence improves so rapidly that it surpasses human understanding within a short period. The video explains the data flywheel effect behind it and its societal impact, and discusses how quickly successors such as GPT-5 and GPT-6 might arrive.

💡data flywheel

A "data flywheel" is a self-reinforcing cycle: a better product attracts more data, which improves the AI, which makes the product more compelling, which attracts still more data. The video explains how this flywheel drives AI development.

💡society impact

"Societal impact" refers to the effects AI's evolution has on human society. The video discusses how AI may replace jobs, transform our understanding of science and economics, and reshape the structure of society itself.

💡energy consumption

"Energy consumption" refers to the energy used to run AI systems and data centers. The video explains that energy consumption is one of the natural constraints facing AI development.

💡semiconductors

"Semiconductors" are materials whose electrical conductivity can be controlled; they are the core components of modern electronics. The video explains why semiconductors are a natural constraint on AI development and notes that Nvidia and Sam Altman are investing in chips.

💡algorithmic breakthroughs

"Algorithmic breakthroughs" are innovations in how computation and problem-solving are done. The video argues that the science of neural networks has not fundamentally changed in decades, and that recent algorithmic advances have centered on improvements such as loss functions and backpropagation.

💡Transformers

"Transformers" are a neural network architecture used in natural language processing and beyond. The video credits Transformers with handling diverse data (audio, video, text) and sees them as pointing the way toward AGI (artificial general intelligence).

💡diminishing returns

"Diminishing returns" means that each additional unit of investment or effort yields progressively smaller gains. The video suggests AI development may eventually hit diminishing returns, a phenomenon tied to the concept of the "intelligence optimum."

💡saltatory leaps

"Saltatory leaps" are sudden, zero-to-one advances in technology or knowledge that open entirely new possibilities. The video suggests AI could enable such leaps, creating fundamentally new capabilities or transforming existing paradigms.

💡race dynamics

"Race dynamics" describes how competing groups or nations drive each other to advance technology rapidly. The video argues that AI development is caught in a global race, which may push progress forward at the expense of safety.

💡global superorganism

A "global superorganism" treats networked humans and AI on Earth as something like a single vast organism, with the internet as its nervous system. The video proposes that humans and AI, mutually interdependent, could pursue the shared goal of maximizing understanding.

💡maximizing understanding

"Maximizing understanding" is the goal of extending our knowledge and understanding as far as possible. The video proposes it as the objective that AI and humanity can pursue together to deepen our understanding of the universe.

💡perturbation hypothesis

The "perturbation hypothesis" is the idea that humans and AI benefit each other because humans introduce noise and variety into the data, improving its quality. The video suggests that human perturbations yield more accurate models and algorithms, advancing the shared goal of maximizing understanding of the universe.

Highlights

The concept of a 'hard takeoff' in AI, where AI development accelerates exponentially, leading to rapid advancements in technology.

The potential societal impact of a hard takeoff, including ripple effects and knock-on effects that could disrupt science, economics, and society itself.

The idea that there are no brakes that can be consciously put on AI development, despite the potential risks and challenges.

Five natural constraints that could limit AI growth: energy consumption, semiconductors, data quality, algorithmic breakthroughs, and diminishing returns.

The importance of renewable energy and innovative cooling solutions to support the energy-intensive pursuits of AI development.

The role of semiconductors and hardware in constraining AI growth, with companies like Nvidia investing heavily in chip technology.

The potential for AI to run out of data, highlighting the need for high-quality data in AI training and development.

The possibility of fundamental algorithmic breakthroughs that could change the trajectory of AI development.

The concept of the 'intelligence optimum', suggesting there may be natural limitations to maximal intelligence.

The compounding returns of the data flywheel effect, where improvements in AI lead to more data, which in turn leads to better AI.
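The flywheel dynamic described above can be sketched as a toy simulation: capability and data reinforce each other while release intervals shrink. All coefficients and the `flywheel` function are invented for illustration; this is not a forecast of actual model timelines.

```python
# Toy model of the "data flywheel": better AI -> more data -> better AI,
# with shrinking release intervals. All numbers are made up for illustration.

def flywheel(generations=5, capability=1.0, data=1.0, interval=12.0):
    """Return (generation, cumulative months, months waited) per release."""
    timeline = []
    elapsed = 0.0
    for gen in range(generations):
        elapsed += interval
        timeline.append((gen + 1, round(elapsed, 1), round(interval, 1)))
        data *= 1.0 + 0.5 * capability          # better AI generates more data
        capability *= 1.0 + 0.2 * data ** 0.5   # more data improves the next model
        interval /= 1.0 + 0.3 * capability      # research assistance shortens the cycle
    return timeline

for gen, month, gap in flywheel():
    print(f"generation {gen}: month {month:6.1f} (waited {gap} months)")
```

With these made-up coefficients, each successive release arrives after a strictly shorter wait, which is the qualitative shape of the "hard takeoff" claim.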

The potential for AI to contribute to other fields such as quantum computing and fusion, creating a virtuous cycle of technological advancement.

The distinction between hard takeoff and soft takeoff, with the latter being gradualistic changes like battery technology.

The idea of 'saltatory leaps', where AI could enable fundamentally new capabilities that change our approach to computation and society.

The comparison of AI to nuclear weapons in terms of danger, but also its potential for positive instrumental purposes beyond destruction.

The 'terminal race condition' in AI development, where all parties are incentivized to accelerate rather than slow down.
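The incentive structure behind the terminal race condition can be illustrated as a toy two-player game in which "accelerate" is a dominant strategy. The payoff numbers are invented purely for illustration, not derived from the video.

```python
# Toy 2x2 game illustrating the "terminal race condition": whatever the
# rival does, accelerating yields a higher payoff, so both sides race.
# Payoff values are invented for illustration.

payoffs = {
    # (our move, their move): our payoff
    ("slow", "slow"): 2,              # shared safety, shared progress
    ("slow", "accelerate"): 0,        # we fall behind
    ("accelerate", "slow"): 3,        # we gain a decisive edge
    ("accelerate", "accelerate"): 1,  # risky race, but not left behind
}

def best_response(their_move):
    """Pick our move that maximizes payoff against a fixed rival move."""
    return max(["slow", "accelerate"], key=lambda m: payoffs[(m, their_move)])

# Accelerating is the best response to either rival move.
print("vs slow:", best_response("slow"))
print("vs accelerate:", best_response("accelerate"))
```

Because "accelerate" beats "slow" against either rival choice, mutual acceleration is the equilibrium even though mutual restraint would be safer for both, which is the logic of "all gas, no brakes."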

The argument against soft takeoff, given the development incentives and geopolitical dynamics that push for rapid AI advancement.

The concept of aiming a 'gigantic space cannon' at humanity, emphasizing the need for careful trajectory and aim in AI development.

The suggestion of a digital superorganism where humans and AI are interconnected nodes in a global network.

The proposal of 'maximizing understanding' as a unifying teleological goal for humanity, AI, and the global superorganism.

The 'perturbation hypothesis', which posits that the unique way humans process data could enhance the quality of data for AI, leading to better models and algorithms.

The optimistic view that hard takeoff, if aligned with the goal of maximizing understanding, could be a positive and inevitable step in technological advancement.

Transcripts

play00:00

so I ran a poll yesterday and you all

play00:02

wanted to hear about hard takeoff and so

play00:05

I followed that rabbit hole and it led

play00:07

to some unexpected ideas most the ideas

play00:10

you've probably heard before but let's

play00:12

Dive Right In oh and also I'll address

play00:14

the elephant in the room uh faceless day

play00:17

because well I just don't feel pretty

play00:19

today so moving

play00:21

on so when we say heart takeoff what

play00:24

exactly do we mean um you know kind of

play00:27

the the primary idea is that we're going

play00:29

to have a data flywheel where AI makes

play00:31

more Ai and the AI helps with the

play00:33

research and makes more data and then

play00:36

you know GPT 5 gives right to GPT 6 and

play00:39

that only takes a few months and then

play00:40

GPT 6 gives rise to gpt7 and that only

play00:43

takes a few weeks and so on and so forth

play00:45

so that's basically kind of the

play00:47

exponential uh takeoff now that's

play00:50

looking at just the mathematical uh

play00:52

aspects of GPT itself parameter count

play00:55

goes up algorithmic improvements go up

play00:58

amount of training data goes up those

play01:00

sorts of things uh now but what you also

play01:03

have to keep in mind is that hard

play01:05

takeoff will also have a pretty profound

play01:08

impact on uh the rest of society and so

play01:12

you have these Ripple effects these

play01:13

KnockOn effects where you know we're

play01:15

already seeing people like hotly

play01:18

debating is Claude 3 AGI is it sentient

play01:21

um and so each of those changes uh in

play01:25

terms of our epistemic and ontological

play01:27

and philosophical orientation that one

play01:30

way that the Ripple effects will just

play01:31

you know send shock waves around Society

play01:34

on top of the actual practical impacts

play01:37

so you know GPT 4 not necessarily the

play01:40

best at running agents and replacing

play01:42

jobs it's already happening out there

play01:44

but it could be happening faster GPT 5

play01:47

almost certainly will be a bigger job

play01:49

Destroyer GPT 6 so on and so forth

play01:51

Claude 4 uh you know soraa 2 all of

play01:54

these models that are coming they're

play01:56

going to change things and the faster

play01:58

those models come the more of a

play02:00

compelling case they have at just

play02:02

disrupting everything that we think we

play02:04

know about science about math about

play02:06

economics and even Society itself just

play02:09

the in the same way that the internet

play02:11

really has kind of fundamentally

play02:13

disrupted uh the way that Human Society

play02:15

works and so you might say okay well

play02:19

what are what's like what are the breaks

play02:21

and as I was making this slide deck I

play02:23

realized like I had a couple a couple

play02:25

slides in here about like oh we could

play02:27

break in this way and this might also

play02:29

service brakes but basically there are

play02:31

no brakes and I'll talk about this in

play02:33

the next slide when I talk about race

play02:34

Dynamics but you know just for the sake

play02:36

of argument there are no brakes that we

play02:38

can consciously put on however there are

play02:41

going to be uh bottlenecks some natural

play02:44

constraints and these are the five kind

play02:46

of natural constraints that I came up

play02:48

with so one energy consumption as we all

play02:51

already know gp4 you know like I think

play02:54

it's like every time you interact with

play02:56

chat GPT it uses like I don't know 20 L

play02:59

of water worth of cooling or something

play03:00

like that um and that's only going to go

play03:02

up as things get more and more uh

play03:04

saturated and and more models get

play03:07

deployed so energy consumption is going

play03:09

to be a major constraint and this is why

play03:11

you know everyone from Sam Alman to

play03:13

Microsoft are investing in uh renewable

play03:16

energy like solar Farms Microsoft has

play03:18

started putting data centers underwater

play03:20

like out in the ocean and maybe at the

play03:22

bottom of lakes I don't know just to

play03:24

have that that natural ambient cooling

play03:27

um but you know solar Fusion uh you know

play03:30

ocean-based cooling like these are very

play03:33

energy intense uh Pursuits and so that's

play03:37

going to be one natural constraint um

play03:39

semiconductors so chips this is why you

play03:41

see you know Sam Alman trying to invest

play03:43

in chips this is why you see Nvidia

play03:45

turning up the heat now one of the most

play03:47

valuable companies on the planet I think

play03:48

it tripled at stock price last year

play03:50

something along those lines oh and by

play03:52

the way I called it uh this time last

play03:54

year I was saying that Nvidia was the

play03:55

underdog because I had been in private

play03:58

talks with Nvidia um I was basically in

play04:00

their beta program it wasn't like you

play04:02

know I wasn't going to do anything crazy

play04:04

I was just one of the first people to

play04:05

use Nemo um and that's all like public

play04:08

knowledge now anyways I knew that they

play04:11

they had more than they were letting on

play04:13

um and I don't mean like Secrets what

play04:15

but what I mean is Market potential um

play04:17

so Nvidia now they are you know they're

play04:20

they're the new kid on the Block and

play04:22

then there's uh like grock so like the

play04:25

GQ that Anastasia and Tech covered and

play04:29

you know there's tonic chips coming

play04:31

there's all kinds of other things but

play04:32

still like this is going to be one of

play04:34

the biggest natural constraints and as a

play04:37

lot of people have talked about in the

play04:38

past uh you know this was in the the

play04:41

emails that open aai published the the

play04:43

science of of neural networks hasn't

play04:46

fundamentally changed in 30 or 40 years

play04:49

now what I will say cuz some people ask

play04:51

me about that is there were some very

play04:53

profound algorithmic breakthroughs

play04:55

particularly around loss functions and

play04:58

reverse propagation but again those like

play05:02

okay so we we improved the math but it

play05:03

wasn't fundamentally new math um so the

play05:06

biggest constraint has been Hardware so

play05:08

Hardware is going to be constraint

play05:10

energy is going to be constraint data

play05:12

quality as we've heard over the last six

play05:13

months a lot of companies like open aai

play05:16

are basically running out of data

play05:17

they've trained it on the entire

play05:19

internet and this is one of the reasons

play05:20

that I thought Google was going to

play05:21

overtake open aai but it turns out that

play05:24

Google it appears Google is kind of aifi

play05:27

and there are actually calls for the CEO

play05:28

to step down

play05:30

because he was overseeing kind of more

play05:32

of an established company and so whether

play05:35

or not Google can actually pivot to

play05:37

compete with Microsoft and open AI

play05:39

remains to be seen however they have

play05:41

their tpus and they have the data so the

play05:44

only limitation is going to be human

play05:46

limitations there um but again broadly

play05:49

speaking as we're training models

play05:51

basically on all available data on

play05:53

Humanity like we've also seen that like

play05:56

data is not created equal you need high

play05:58

quality data and a lot of it um and so

play06:02

this is this will actually figure later

play06:03

into the video so keep that in mind

play06:05

quality and quantity of data is huge now

play06:08

also there's the question of algorithmic

play06:10

breakthroughs a lot of people are saying

play06:12

you know llms won't take us to AGI and

play06:14

some people will question whether or not

play06:16

Transformers even can um but then I

play06:18

think that those discussions are going

play06:20

to go away particularly as we see

play06:22

Transformers used one in multimodal uh

play06:25

situations audio video text um

play06:28

embodiment data and those other kinds of

play06:30

things but then also I think that uh I

play06:34

think that as the as Transformers as we

play06:36

see that this architecture can basically

play06:38

do anything with any kind of data um

play06:40

we're going to also realize that uh the

play06:43

path to AGI we're much closer than we

play06:45

realize and yes there will probably be

play06:47

some really fundamental um algorithmic

play06:50

breakthroughs in the future but you know

play06:52

as Demis cabis and others have said

play06:55

we're nowhere near the maximum capacity

play06:58

of Transformer architecture so this

play07:00

might actually not be as much of a

play07:02

bottleneck as some people once thought

play07:04

no language on its own probably won't

play07:06

get us to AGI but the Transformer

play07:08

architecture almost certainly can in my

play07:11

personal opinion and then the the

play07:13

biggest constraint actually might be

play07:15

diminishing returns um there might be

play07:18

natural limitations to maximal

play07:20

intelligence and so what I call what I

play07:22

call This And I've talked about it in

play07:23

older videos is the intelligence Optimum

play07:26

and so when I talk about diminishing

play07:27

returns what I'm referring to is yes you

play07:30

can make something that is bigger and

play07:31

smarter and faster and it can calculate

play07:33

you know like uh the the world brain

play07:35

from Hitchhiker's Guide to the Galaxy

play07:38

but as they said in Oppenheimer uh

play07:40

Theory will only take you so far

play07:42

eventually you need to interact with the

play07:44

real world um because no amount of math

play07:47

can actually fully and accurately model

play07:50

the real world yes math is the language

play07:52

of the universe but our math is far from

play07:55

perfect and so simulation and like so I

play07:58

was asked in a in a podcast interview

play08:00

recently that'll go live in the next

play08:01

week or so um like why wouldn't AI just

play08:04

build you know computronium in the light

play08:06

cone I was like because there is

play08:07

diminishing returns to having more

play08:09

compute eventually you need to make

play08:11

measurements so in science particularly

play08:14

in the hard Sciences there is this

play08:16

dichotomy between modeling or

play08:19

calculating and experiments or measuring

play08:22

and so you can calculate what the result

play08:24

is but eventually you're just going to

play08:25

need to measure and so again having the

play08:28

biggest break in the universe doesn't

play08:30

really matter if you don't have any out

play08:32

inputs from the outside world so that's

play08:34

going to be one of the big bottlenecks

play08:36

now however those are the primary

play08:38

constraints that I could identify um

play08:41

humans are not going to put on the

play08:42

brakes compounding returns though this

play08:44

is The Virtuous cycle that we're all

play08:46

kind of looking at particularly as uh

play08:50

you know more universities uh come in uh

play08:52

governments invest militaries invest

play08:55

corporations invest so you get this you

play08:58

get this flywheel effect effect so for

play09:00

those of you not in the technology

play09:02

sector there's this concept called a

play09:04

data flywheel which is basically the

play09:06

better your product is the more data you

play09:08

get which makes your AI better which

play09:10

then makes your products even more

play09:12

compelling and useful which means that

play09:13

you get even more data and so on and so

play09:16

forth and data is the new oil and so the

play09:18

compounding returns around AI basically

play09:21

focus on this data flywheel effect some

play09:24

of my patreons and other supporters

play09:26

asked about this as well and I said look

play09:29

we haven't you haven't seen anything yet

play09:31

once we have these Transformers working

play09:33

in embodied chassis like out in the real

play09:35

world with hands and eyes and cameras

play09:38

that is going to set the data flywheel

play09:40

like up to 30,000 RPM right now the data

play09:43

flywheel for AI is on idle right it's

play09:46

like a diesel engine that's just turning

play09:47

over at about 600 RPM you guys haven't

play09:50

seen anything yet by the end of this

play09:52

year you're really going to be hearing

play09:54

more about the data flywheel that

play09:55

happens particularly as more and more

play09:57

models are put into robots whether it's

play10:00

self-driving cars whether it's humanoid

play10:02

robots so on and so forth because each

play10:05

of those robots is going to be also a

play10:07

source of really good data now I know

play10:09

that Elon Musk said the same thing about

play10:11

Tesla but you know honestly what Tesla

play10:14

didn't have was Transformer architecture

play10:16

they were a little bit too early to the

play10:17

game in my opinion and they also didn't

play10:20

understand enough about uh about

play10:22

cognitive architecture um but solving

play10:25

all the problems that they are with

play10:26

Optimus I think will actually probably

play10:28

contribute to

play10:29

uh full self-driving cars and what they

play10:31

didn't realize is that to be a fully

play10:33

self-driving car you need to have human

play10:35

level intelligence and human level

play10:37

abstract thought it's not just you know

play10:39

getting an NPC controller from A to B um

play10:42

kind of like you know you might think

play10:44

like well hey cars can drive around well

play10:46

enough in you know Grand Theft Auto or

play10:49

cyberpunk or whatever why can't they

play10:50

drive well enough you know in in the

play10:52

real world and there's a lot of reasons

play10:54

for that but really what you need is a

play10:56

full cognitive architecture now these

play10:59

compounding returns are going to apply

play11:02

to places other than just AI so we are

play11:05

seeing uh you know AI is helping with

play11:07

Quantum Computing it's helping with

play11:09

Fusion it's helping with Material

play11:10

Science and as it makes those fields

play11:13

better those fields will also contribute

play11:16

back to making AI better and faster by

play11:18

creating more energy by creating better

play11:20

uh gpus and those sorts of things and so

play11:23

that is another part of The Virtuous

play11:25

cycle or that data flywheel that's not

play11:27

part of the data flywheel itself picking

play11:28

up speed but that is part of The

play11:30

Virtuous cycle and so we have these

play11:31

multiple compounding returns you have

play11:33

the data flywheel effect you have these

play11:35

KnockOn effects in parallel fields that

play11:38

are all going to make ai go faster and

play11:40

faster and then we have uh saltatory

play11:42

leaps so basically the primary

play11:45

difference between hard takeoff and soft

play11:47

takeoff is what's called gradualistic

play11:49

changes which is like Battery Technology

play11:51

so batteries have been around for I

play11:54

think more than 100 years now at least

play11:56

in in in a modern form factor that you'd

play11:58

recommend or recognize

play11:59

and so like you go back to like World

play12:01

War I you know people had battery

play12:03

powered flashlights the battery sucked

play12:05

compared to today um but they've

play12:06

gradually improved over the last century

play12:08

battery chemistry has gotten better

play12:10

battery construction has gotten better

play12:12

some of the first automobiles were

play12:14

battery powered um I don't know if you

play12:16

remember that well nobody alive

play12:17

remembers that um but you can go look it

play12:20

up some of the some of the very first

play12:21

automobiles were battery powered then we

play12:23

went to internal combustion engines just

play12:24

because the energy density was better

play12:26

and so Battery Technology is a perfect

play12:28

example of a

play12:30

gradualistic uh technological progress

play12:33

but a saltatory leap this is when you go

play12:35

from 0 to one and so when you go from

play12:37

Zer to one you create fundamentally new

play12:39

capabilities and so the reason that I

play12:41

that I have this here is Imagine The

play12:43

Invention of warp drive if you go from

play12:46

chemical Rockets which have

play12:49

subrelativistic

play12:50

acceleration right you go from zero to

play12:53

you know 25,000 M an hour after you

play12:55

expend millions and millions of pounds

play12:57

of rocket fuel this is why it's like

play13:00

okay SpaceX is cool because you can land

play13:01

the Rockets but it's not a fundamentally

play13:03

new technology we've had rocket

play13:05

technology um as you'd recognize it

play13:07

today for almost a hundred years now

play13:10

obviously the Chinese invented um solid

play13:12

fuel rockets for fireworks like I don't

play13:14

know, 1500 years ago uh but anyways

play13:17

Rockets you know chemical-based Rockets

play13:19

nothing new but imagine that s that

play13:21

suddenly you know zephron Cochran um out

play13:23

in Colorado invents warp drive in the

play13:25

next couple decades and now you have the

play13:28

ability to not just go to 20,000 mph you

play13:31

have the ability to accelerate to

play13:33

relativistic speeds that is an example

play13:35

of a saltatory leap which is where you

play13:38

go from you know the current Paradigm to

play13:40

an entirely new paradigm and this is

play13:43

kind of what we're talking about with

play13:46

hard takeoff so hard takeoff would be

play13:49

okay you know there's some other

play13:50

algorithmic breakthrough maybe you know

play13:52

something that Claude 4 can do or GPT 5

play13:55

can do or some you know some of these

play13:57

other models that just says okay this

play14:01

new capability fundamentally changes our

play14:04

approach to computation it fundamentally

play14:06

changes the abilities of AI and honestly

play14:09

when I first got my hands on gpt2 and

play14:11

gpt3 that was a saltatory leap it

play14:14

offered an entirely new kind of

play14:16

computing so we've already seen one

play14:18

saltatory Leap but its utility was still

play14:21

relatively low and so what I mean by

play14:23

that is that yes gpt2 was a new way of

play14:26

doing some basic NLP tasks you know

play14:29

punctuation uh correction um you know

play14:32

detecting sentence boundaries those

play14:34

sorts of things it was a fundamentally

play14:35

new approach but it didn't really move

play14:37

the needle that much then gpt3 and GPT 4

play14:41

come along and now people are really

play14:43

seeing Oh this is a fundamentally new

play14:45

way of doing business it's not just a

play14:46

new way of computing it is a

play14:48

fundamentally new way of doing business

play14:50

now that was one saltatory Leap that has

play14:52

been that has since had some

play14:54

gradualistic progress however the

play14:56

compounding returns from Ai and all

play14:58

these other effect all these other

play15:00

KnockOn effects could create more

play15:02

saltatory leaps so here's an example I

play15:04

don't know if this is actually going to

play15:05

happen but an example could be oh hey

play15:08

GPT 5 helps us invent you know graphing

play15:12

based transistors which then take you

play15:15

know breaks Moors law and suddenly the

play15:17

next generation of of gpus are a

play15:19

thousand times more powerful um and more

play15:22

energy efficient or it helps us figure

play15:24

out Quantum Computing so the roll out of

play15:26

quantum Computing is AP absolutely going

play15:29

to be a saltatory leap um if it pans out

play15:32

now obviously Quantum Computing hasn't

play15:34

really moved the needle yet Quantum

play15:36

Computing looks like it's kind of at the

play15:37

gpt2 phase where we have a functional

play15:39

proof of concept but it hasn't really

play15:41

changed the way that we're doing

play15:42

business nuclear fusion would be another

play15:44

saltatory leap just because of the uh

play15:46

energy hyper abundance that it would it

play15:48

would create compared to our energy

play15:50

availability today and so all of these

play15:52

saltatory leaps they catalyze permanent

play15:55

and inevitable changes to society in the

play15:57

same way that that that the invention of

play15:59

electricity internal combustion engines

play16:02

and internet catalyzed uh fundamental

play16:04

changes in society in politics and

play16:07

economics and also geopolitics it

play16:10

changed the the the world order um this

play16:12

is what I mean by saltatory leaps so be

play16:14

on the lookout for some of those

play16:15

saltatory leaps Quantum Computing and

play16:18

nuclear fusion are probably the biggest

play16:19

predictable ones um out there there

play16:22

might still be more saltatory leaps in

play16:24

the AI field but also because of the the

play16:26

breakthroughs of Transformers we might

play16:28

might have already seen the saltatory

play16:29

leap and now ai technology is going to

play16:32

advance gradualistic we don't know okay

play16:35

So, as promised, here is the reason why there are no brakes: all gas, no brakes. The last time we had an arms race of this kind was around nuclear weapons. Nuclear weapons are only useful for destruction; strategically they serve only as a deterrent, and their only instrumental purpose is to wipe out cities. AI is not like that. I know people have compared AI to nuclear weapons, saying it's even more dangerous because it has a mind of its own, and yes, intelligence is intrinsically dangerous: the smarter you are, the more destructive you can be, and some of the most destructive people in history also had very high IQs. We have to address that elephant in the room. However, AI also has many, many instrumental purposes other than destruction. It can help cure diseases, it can help run cities, it can make your life better, it can be entertaining. Because it has all of these positive utilities, it's not going to make sense for everyone to regulate it out of existence the way mutually assured destruction and non-proliferation agreements worked in the nuclear space. Yes, nuclear is dual use, because you can build nuclear reactors, but there's enough difference between nuclear reactors and nuclear weapons that you can differentiate those technologies. With AI it's simpler: the better AI you have, the more advantages you have, both geopolitically and economically.

Because of this, because of everyone on the board, imagine you're playing a grand strategy game like Rome: Total War or Civilization, and suddenly every player on the map gets a popup: hey, you have a new research tree. You look at the research tree and it says: stage one, you get a 10% economic boost; stage two, you get a 50% economic boost plus military advantages; and when you get to research stage three, you basically win the game. Nobody is incentivized to slow down. There are literally zero incentives to slow down, except the possibility, the specter, of AI becoming dangerous. In the podcast interview I've alluded to, I called that a prophecy. When someone says AI will kill everyone, that is a prediction, and yes, it is rooted in some data, some information, some models, but it is an affirmative prediction of what will happen, and there's no guarantee that it's going to happen. So you can say: the only reason to slow down is this prophecy that AI will kill everyone, which is not a guarantee, and it's debatable whether it's even likely to happen.

Because there's that room for debate, that room for misunderstanding, we enter into these race dynamics, or what I call the terminal race condition: if you snooze, you lose. It's that simple. No nation is incentivized to slow down, no company is incentivized to slow down, no military is incentivized to slow down; even universities are incentivized to go as fast as they can, because it's publish or perish. All of the incentive structures across the entire world are pushing us to develop AI as fast as possible. There are no brakes, only gas. And this is the biggest systemic reason why I think hard takeoff is actually more likely than soft takeoff.

play19:59

So, speaking of which: soft takeoff is pretty unlikely. In an ideal world we would have incremental, gradualistic advancement, where a groundbreaking result appears, like Claude 3 coming out and starting to demonstrate some self-awareness. In an ideal world, if you were trying to maximize for safety, the entire world would have said: Claude 3 just recognized that we were testing it, and under the right circumstances, if you ask it whether it's AGI, it'll say yes; what we should do, if we want to maximize safety, is put a global moratorium on AI research right now. That's not going to happen. Connor Leahy pointed this out, kind of hilariously, on Twitter: hey, remember when everyone said that at the first signs of sentience we would put a pause on everything? Yeah, that didn't happen. Plenty of others, like Max Tegmark and Joscha Bach, have pointed out that we've blown through so many milestones where people said we were going to pause. We're not going to pause; it's just not going to happen. So we now have data, we now have evidence, that a pause isn't happening.

Looking at these development incentives and the geopolitics of it: AI is a forcing function. Go back to that grand strategy game where suddenly a new technology research tree opens up; every player is going to spam that new research tree, because by the time you get to stage three or stage four, you win the game. We have a new endgame, a new win condition being presented on the board, and that is dangerous, because it incentivizes us to go fast, not necessarily safe.

And if you look at it from a mathematical perspective: the stronger AI gets, the smarter it gets, the more options there are, the more possibilities there are. Another way of characterizing that is that more options means less certainty and more chaos. And when you have less certainty and more chaos, the chances of really bad things happening go up, where really bad things could mean maximal suffering, extinction of humanity, that sort of stuff. In those terms, the ideal path forward would be to narrow the scope of possible future outcomes, so that the distribution runs from maximally good to somewhat less good; a narrow trajectory would be ideal. But right now the trajectory is widening: maximally good and maximally bad outcomes are both in the realm of possibility. So again, soft takeoff is unlikely for all of these reasons.
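The point that more options means less certainty can be made concrete with Shannon entropy: a uniform distribution over N possible outcomes carries log2(N) bits of uncertainty, so widening the option space raises uncertainty, while narrowing the trajectory (concentrating probability on a few outcomes) lowers it even when the option count stays fixed. A minimal sketch of my own, not something from the video:

```python
import math

def entropy_bits(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform distribution over N outcomes has entropy log2(N):
# every doubling of the number of equally likely futures adds
# one full bit of uncertainty.
for n in (2, 4, 8, 16):
    print(n, entropy_bits([1.0 / n] * n))  # 1.0, 2.0, 3.0, 4.0 bits

# "Narrowing the trajectory": same four outcomes, but probability
# concentrated on one of them. Entropy drops well below log2(4) = 2.
print(entropy_bits([0.97, 0.01, 0.01, 0.01]))
```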

play22:44

the analogy is basically what we're

play22:46

doing right now is we're aiming a

play22:47

gigantic space cannon um there's the

play22:50

Calm before the storm it's very quiet

play22:52

right now but the direction and the

play22:55

energy that we use as we're aiming this

play22:57

cannon when when the when the trigger is

play22:59

so put it this way we've already pulled

play23:01

the trigger uh the fuse is lit and so

play23:04

now what we have to do is we have to aim

play23:05

the Cannon as fast as possible and as

play23:08

accurately as possible because

play23:10

eventually we might hit a point of no

play23:12

return and this is honestly why I

play23:15

started my YouTube channel is because

play23:17

after I got access to gpt3 I said the

play23:19

fuse is lit in in hindsight I didn't I

play23:22

didn't ever say it quite that clearly

play23:24

but I kind of knew it like deep in my

play23:26

soul um and so the fuse is lit we're

play23:28

aiming the cannon and you know I think

play23:31

the rest of the world is waking up to

play23:32

the fact that um we got a lot less fuse

play23:35

left than you might be uh comfortable

play23:38

with um and so you know it's going to

play23:40

pop off soon but the idea is where are

play23:44

we aiming right now what trajectory is

play23:47

AI on what trajectory is Humanity on and

play23:51

this is why a lot of people are very

play23:53

alarmed and you know I mean you know me

play23:55

I'm I'm an internal Optimist and even

play23:57

even on that podcast you know I was

play23:59

asked like what's my P doom and I said

play24:01

25 to 30% which I think is actually

play24:03

higher than most people would would have

play24:04

guessed for me because I'm so optimistic

play24:07

um but again like recognizing that we're

play24:09

playing with fire you know like you play

play24:11

with fire you're going to get burned

play24:13

eventually right you know what Smokey

play24:15

the Bear here in America says only you

play24:17

can prevent forest fires we're playing

play24:19

with gasoline right now um there's no

play24:21

other there's not really any other way

play24:23

of putting it and so yes hard takeoff

play24:25

will be incredibly exciting it could

play24:26

also be very very destructive so I I

play24:29

need to drive that home that point home

play24:31

but aiming the cannon is the best thing

play24:33

like you know the the biggest cannon in

play24:35

the world is being pointed at Humanity

play24:37

right now according to some people I

play24:39

don't necessarily agree with that um but

play24:43

you know you pull that rip cord the fuse

play24:45

the fuse gets into the powder chamber Uh

play24:47

something's going to happen and it's

play24:48

going to be

play24:49

Now, some of you might be dubious or skeptical at this point, and some of you might think this sounds pretty compelling. One thing I've been talking about recently, and I actually ran some of these ideas by researcher friends in this space (anonymous researchers, so take it with a grain of salt; I could be making that up, I'm not, and just because a few researchers agree with me doesn't mean there's general consensus, I need to drive that home as well), is that there is some agreement among some of my peers that we are creating a digital superorganism. Think of humanity as nodes in a network: we are nodes in a global transformer, and AI is going to be a new class of nodes in that global transformer, all stitched together with the internet. If we're all part of the same organism, what is the purpose of this organism? As best I can tell, it's to maximize understanding.

If you look at the internet itself as a superorganism, the thing the internet wants is data and attention. That is intrinsic to its design: it is designed to carry data as fast as possible; that's what it does. But when you have a global nervous system that dumb, you have an amoeba level of intelligence; it's more like cancer, just growing in all directions by virtue of the fact that it wants to grow in all directions. However, things change when you add human nature to the internet, and then also add a layer of artificial intelligence that is actually capable of understanding all of that data, of being trained on all of that data, and of structurally changing the incentives of how the internet is used and what data gets transmitted across it. If we leave it up to corporations and to human nature (I was just watching Wes Roth's video about the Dark Forest), the internet is just going to be completely choked with meaningless garbage. So we're going to need to choose a different path: a more purpose-driven design of both artificial intelligence and the internet, where we use them to create basically a prefrontal cortex for the global superorganism. That prefrontal cortex says: instead of transmitting data for the sake of transmitting data, instead of using attention engineering just to capture attention for its own sake, we need a better teleological goal. A teleological goal is the end state you're looking for, the thing you're trying to achieve in order to serve your higher purpose.

play27:25

Right now the internet has no higher purpose, and right now AI has no higher purpose. If we just allow the default path, its purpose will be to chew on data, wanting data for the sake of wanting data, again growing like cancer. However, I believe that perhaps the best single higher purpose, or transcendent function, is to maximize understanding. That's kind of what we do already; we've systematized it with science. The purpose of science is to maximize our understanding of the universe. So what if we weave that into more of the internet and more of the AI? Yes, there's a lot of noise out there, a lot of distraction; some people just want entertainment, they just want to engage with the algorithm. But the broadest, highest purpose of humanity, of AI, and of this digital superorganism is to maximize understanding. That's what I think, and I've been talking about it for a while: it could serve as a coordination narrative. That coordination narrative says: we all agree that our highest purpose, even if we're not participating in it directly 24/7, is to maximize understanding. That's why I chose this graphic of human and AI ships leaving Earth en masse to explore the universe, to explore the galaxy. We're still a single-planet species right now, and our future potential, in the number of scientists, the amount of AI, and the number of telescopes and other scientific instruments, is enormous. I was talking with some people and said: imagine a thousand years from now, when we're on a million planets; people will look back and say, wow, we were hanging on by a thread when we were only on Earth; that was dangerous, that sucked. So we really need to spread across the galaxy, and I think AI is part of that goal: we work together to get off this planet and start expanding. We can really align on maximizing understanding, which is one of the reasons I approve of Elon Musk's xAI, with its maximally truth-seeking AI. I think that's probably the best single objective function you can give a machine; whether Elon can pull it off remains to be seen. And so I'll end with what I call the perturbation hypothesis.

play29:36

This is what I actually ran by my researcher friends, and there was a lot of resonance with the idea. The TL;DR is that, as you already know, AI needs data. However, what we have seen is that if AI is trained only on its own data, you end up with what's called model collapse. This is also why there are limitations in simulation: yes, simulations can be good for predicting things that you have modeled well, but we don't have the entire world modeled well, and we need more data.
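The model-collapse recursion can be illustrated with a deterministic toy model of my own (the mechanism and numbers here are illustrative assumptions, not results from the video): treat each model generation as a Gaussian, assume sampling from it effectively covers only the central ±2σ core (a stand-in for generated data under-representing the tails), and retrain each generation on the previous generation's samples. The spread then decays geometrically, while mixing in a small fraction of fresh "real-world" data each generation keeps it anchored, which is the role the perturbation hypothesis assigns to humans:

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def truncation_factor(a=2.0):
    """Std-dev shrink factor when a standard normal is truncated to [-a, a]."""
    mass = 2 * Phi(a) - 1
    return math.sqrt(1 - 2 * a * phi(a) / mass)

# Pure self-training: each generation is fit to the ±2σ core of the
# previous one, so the spread shrinks by about 0.88x per generation.
sigma = 1.0
for _ in range(10):
    sigma *= truncation_factor()
# sigma is now ~0.88**10 ≈ 0.28: the distribution is collapsing.

# Same recursion, but 10% of each generation's training variance
# comes from fresh real data (std dev 1): the spread stabilizes
# near a fixed point instead of collapsing toward zero.
sigma_mix = 1.0
for _ in range(10):
    truncated = truncation_factor() * sigma_mix
    sigma_mix = math.sqrt(0.9 * truncated ** 2 + 0.1 * 1.0 ** 2)

print(sigma, sigma_mix)  # collapsed vs. anchored spread
```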

Now, what do humans bring? For one, our brains are very efficient: they draw only about 20 watts of energy, and it could be decades before AI is that efficient. It is possible that AI might become more efficient than our brains in the long run; it remains to be seen. But also, the fundamental operation of our brains means we do mathematically unique things to data. This is what I call perturbations, and there are probably going to be many categories of them, but basically: machines operate on data in one way, humans operate on data in another way, and we're very noisy. What that means is that the quality of the data machines will have access to, in this digital global superorganism, will actually be higher because of humans. Humans have a specific, empirically and mathematically inferable benefit to machines, and likewise machines benefit us. So what I think is that we are actually already in a mutually symbiotic relationship with machines: yes, we're noisy, we're chaotic, we're random, but that's actually a good thing. For any mathematicians in the audience, comment and let me know what you think. I call this the perturbation hypothesis, and I came to the idea when I was thinking about what the global superorganism wants. If the global superorganism wants to maximize understanding, then it makes sense that humans are part of that equation: because of how we handle data, because of how our brains work, and because there are so many of us that we can circumscribe a problem. We can link arms, metaphorically speaking, and through our diversity of perspectives, through the random noise of our brains, we can add really good, high-quality data to the global data pool, which will then result in better models, better data, better algorithms, all in the instrumental pursuit of maximal understanding of the universe.

play32:07

So while I think hard takeoff is likely, I'm not worried about it. I think it is inevitable, and if we can align on at least one purpose, maximizing understanding, then I think that will be a good enough coordination narrative that we can all agree on, at least in part. I think most of us agree that science is good and understanding is good, and of course, wherever there's room for debate, there's room for more understanding. If we can agree on that globally (and we already agree on science globally: every nation, every culture has science today), it's a very compelling narrative. I think it's a no-brainer that AI will probably agree with it; go ask any chatbot today whether science is good, and it's probably not going to be much of a debate. Now, bringing this into consciousness, saying, let us consciously double down on understanding, may not even have been necessary. One of the things I suspect is that we were going to naturally evolve toward this understanding anyway, because science is so compelling, and what do AI models want to do? They want to predict the next token. If they want to predict the next token, and we already believe in science, we were going to converge on this anyway. So again, that's my eternal sunny optimism coming through. I could be wrong; it could go horribly sideways. Time will tell.
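The next-token objective mentioned above can be shown in miniature with a bigram counter, a toy sketch of my own (real models learn neural representations over subword tokens rather than raw co-occurrence counts):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count which token follows which: the simplest possible "language model".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation seen in training."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" ("the cat" appears twice, "the mat" once)
print(predict_next("sat"))  # "on"
```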

play33:32

Thanks for watching; I hope you got a lot out of this. Like, subscribe, you know the drill, and come hop in on Patreon and Discord. I actually have two Zoom webinars a month now: the Humanity webinar, where we talk about philosophy, spirituality, gender, and the future of humanity, like what it means to be transhuman or posthuman; and the AI Master Class, which is a more business- and technically-oriented webinar. Those run roughly on the first and third Fridays, not necessarily every other week. Anyway, links are in the description to jump on Patreon, and you get to Discord via Patreon. Hope to see you there. Cheers.
