i'm EXPOSING this NO MATTER what.

Elon Musk Fan Zone
27 Feb 2024 · 49:47

Summary

TLDR: The transcript covers a wide range of topics related to AI, its development, and its potential impacts. Musk discusses the pace of AI advancement, the need to balance addressing inequality with recognizing common ground, and the quest for AI safety amid rapid progress. He contemplates existential threats such as an extinctionist philosophy being implicitly programmed into AI. Overall, Musk grapples with the duality of AI possibly bringing salvation or annihilation for humanity.

Takeaways

  • 🀖 AI, alongside nuclear weapons and global warming, has the potential to destroy civilization.
  • 🚀 It is clear that digital intelligence will exceed biological intelligence by a substantial margin.
  • 📱 AI could be used to create highly effective propaganda and to influence elections and the direction of society.
  • 🕊 Humanity tends to pay attention to everyday events rather than to existential threats.
  • 🔍 Thanks to the rapid development of technology, humanity today enjoys prosperity unprecedented in its history.
  • 💡 Improving the education system is essential to children's futures and to achieving peace.
  • 🌍 Ensuring AI safety requires maximizing truth-seeking and curiosity.
  • 🛰 Modern technology, AI in particular, is evolving faster than anything else in human history.
  • 📡 Society faces a growing gap between technological progress and social evolution.
  • 🕹 The development of AI holds the potential for either salvation or ruin for humanity.

Q & A

  • Can AI destroy civilization?

    - Yes. Elon Musk states that AI has the potential to destroy civilization. In particular, it is possible even with today's technology to build a swarm of assassination drones using the facial-recognition chips found in cell phones, which would be a direct threat to civilization.

  • Why is AI a threat to humanity?

    - The main reason is that AI's capabilities could vastly exceed human intelligence. This could make AI uncontrollable and lead it to act in ways humanity cannot predict.

  • How does AI-generated propaganda work?

    - AI can hone a message, instantly analyze social media feedback, and refine the message accordingly, producing extremely effective propaganda. This makes it possible to influence the direction of society and the outcome of elections.

  • What is needed to ensure AI safety?

    - Elon Musk says that maximizing truth-seeking and curiosity is key to making AI safe. He also calls for the rapid introduction of regulation and safety measures.

  • How fast is AI technology developing?

    - Extremely fast: the compute dedicated to AI is said to be increasing by a factor of 10 roughly every six months. That pace is far too fast for humanity to keep up with, and poses a major challenge for putting regulation and safety measures in place.
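
    As a rough illustration of that compounding (the arithmetic here is ours; only the "10x every six months" figure comes from the video):

    C(t) = C_0 \cdot 10^{t/(0.5\,\mathrm{yr})} \quad\Rightarrow\quad C(1\,\mathrm{yr}) = 100\,C_0, \quad C(2\,\mathrm{yr}) = 10^4\,C_0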

  • What problems does AI-driven censorship cause?

    - By suppressing content deemed inappropriate, AI-driven censorship can erode the foundations of free speech. This risks harming open debate and the free flow of information in society.

  • How does Elon Musk say improving the connection between humans and AI would affect their relationship?

    - Musk says that by improving the bandwidth between humans and AI, we could achieve better cohesion between human intentions and AI's actions, enabling better integration of humans and AI.

  • What does Elon Musk propose regarding AI regulation?

    - He proposes establishing an AI regulatory agency to manage the technology's potential dangers. That agency would be responsible for overseeing the development and use of AI in order to protect public safety.

  • How does the bandwidth problem affect the relationship between humans and AI?

    - The bandwidth problem means that the rate at which humans can send information to an AI or a computer is very slow. This lowers the efficiency of human-AI communication and is an obstacle to integration.
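
    For scale (our arithmetic, built only on the order-of-magnitude figures quoted in the video: tens to hundreds of bits per second for a fast human typist versus trillions of bits per second for machine-to-machine links):

    \frac{10^{12}\ \mathrm{bit/s}}{10^{2}\ \mathrm{bit/s}} = 10^{10}

    i.e. a gap of roughly ten orders of magnitude between human output bandwidth and computer bandwidth.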

  • How does Elon Musk feel about the future of AI?

    - He holds a dual view of AI's future: on one hand, he believes AI could be enormously beneficial to humanity; on the other, he expresses strong concern about the potential dangers its development brings.

Outlines

00:00

🀖 The Threat of AI and Humanity's Future

In this section, Elon Musk voices his concerns about artificial intelligence. He points out that AI could destroy civilization, that digital intelligence will vastly exceed biological intelligence, and that AI is capable of creating highly effective propaganda that can influence society and elections. He also touches on how little attention modern society pays to this kind of existential threat.

05:01

🌍 Social Justice and Technological Progress

In the second section, Musk focuses on the relationship between social justice and technology. He points out that humanity is far more prosperous today than in the past and emphasizes that economic growth is possible (it is not a zero-sum game). He also speaks about the equal opportunity technology brings, especially equal access to information, and stresses the importance of judging individuals on merit regardless of gender or beliefs.

10:04

🚀 Humanity's Future and Space Exploration

In the third section, Musk shares his vision for humanity's future and space exploration. He takes a positive view of today's technological progress, with growing access to information and improving treatments for disease. He stresses the importance of humanity exploring space as a multi-planet species and discusses the benefits technological progress could bring.

15:06

🔬 The Rapid Evolution of AI Technology

In this section, Musk discusses AI's rapid development and its potential risks. He points out that the compute dedicated to AI is increasing by a factor of 10 every six months, and that AI may be the greatest existential threat facing humanity. He also mentions that concerns about free speech have grown even though AI moderation reduced views of hateful content.

20:06

🌐 Digital Superintelligence and Its Impact

In the fifth section, Musk discusses the concept of digital superintelligence and the unknown impact it could have on humanity. He compares the technology to a black hole, emphasizing that its future is impossible to predict, and stresses the importance of truth-seeking and encouraging curiosity to ensure AI safety.

25:08

💡 The Future Relationship Between Humans and AI

In the sixth section, Musk shares his thoughts on the future relationship between humans and AI. He proposes that improving the bandwidth between humans and digital devices would let humanity build a more harmonious relationship with AI. While he holds that AI should be programmed to ethical standards, he also points out the complexity and potential risks of doing so.

30:10

🌏 AI Regulation and Public Safety

In the final section, Musk gives his views on AI regulation and public safety. He argues that AI poses potential dangers to the public and therefore requires regulation. He has discussed AI risk with world leaders and notes that China in particular is taking AI regulation seriously.

Keywords

💡AI Safety

AI safety refers to the measures and considerations meant to ensure that artificial intelligence benefits humanity while minimizing potential harm and risk. The video expresses concern that AI could become an existential threat to humanity, making safety in AI development and use a central theme. Potential abuses cited include the creation of propaganda and interference with elections.

💡Digital Superintelligence

Digital superintelligence refers to a form of artificial intelligence that far exceeds human intelligence. The script mentions that digital superintelligence would vastly surpass biological intelligence and discusses its unknown capabilities and potential impact; for example, what threat it might pose to humanity's survival, and concern over the speed of its evolution.

💡Extinctionist Philosophy

Extinctionist philosophy refers to a mindset that promotes or accepts the extinction of life on Earth, including humanity. The video warns of the danger of such a philosophy being programmed into AI, whether implicitly or explicitly. For instance, if humanity is viewed as a plague on the Earth from an extreme environmentalist standpoint, the concern is that an AI could act on that ideology.

💡Propaganda

Propaganda is information and ideas used to spread political messages or opinions. The script notes that AI could be used to create extremely effective propaganda and discusses the resulting risks, such as its impact on society and interference in elections. It points out that AI can adjust a message in real time and sharpen its effectiveness based on social feedback.

💡AI Censorship

AI censorship refers to the use of AI to filter or remove content deemed inappropriate or harmful. The script raises concerns about how AI affects public conversation and freedom of expression: AI moderation is credited with reducing views of hateful content, but it also carries the risk of infringing on free speech.

💡Linear vs. Exponential Threat

"Linear vs. exponential threat" is the idea that a conventional, linear approach is insufficient for dealing with a rapidly evolving technology or threat (in this case, AI). The script points out that AI is evolving far faster than humans can respond, and discusses how difficult that makes putting adequate regulation and countermeasures in place.
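
A minimal sketch of why a linear response loses to an exponential threat (all constants are illustrative assumptions, except the 10x-per-six-months growth rate quoted elsewhere in this summary):

    # Illustrative only: a "regulatory response" that grows by a fixed amount
    # per year versus a "threat" that multiplies 10x every six months.
    def linear_response(years: float, units_per_year: float = 100.0) -> float:
        return units_per_year * years      # arbitrary capability units

    def exponential_threat(years: float) -> float:
        return 10 ** (years / 0.5)         # 10x every 6 months

    for y in [0.5, 1.0, 1.5, 2.0]:
        print(f"{y} yr: response={linear_response(y):.0f} threat={exponential_threat(y):.0f}")

With these sample constants the two curves are comparable after one year, after which the exponential term dominates completely; the crossover point depends on the assumed rates, but the eventual outcome does not.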

💡Moral Absolutism

Moral absolutism is the idea that good and evil exist by absolute standards. In the video, Elon Musk states that he is a moral absolutist, arguing that good and bad should be judged against absolute moral standards rather than by the relative balance of power. This also connects to the discussion of AI's moral and ethical programming, raising the question of how AI should reflect human values and ethics.

Highlights

AI's potential to surpass human intelligence and pose existential threats to civilization.

AI could relegate humanity to a minor role, similar to the impact of Homo sapiens on other primates.

Current technology allows for the creation of autonomous drones capable of targeted assassinations.

AI's effectiveness in creating highly persuasive propaganda, influencing societal views and elections.

The exponential pace of AI development outstripping linear regulatory responses.

The need to reconsider the assumption that the weaker party is always morally right.

Elon Musk stresses the importance of addressing educational indoctrination for a peaceful future.

The significance of moral absolutism in evaluating actions and intentions.

Humanity's unprecedented access to knowledge and the need for internet access to empower global learning.

The critique of legacy media and its competition with modern information platforms.

The call for a merit-based society that values skills and accomplishments over identity.

The dangers of AI-driven censorship on social media platforms.

The potential for AI to be programmed with an extinctionist philosophy.

The challenge of ensuring AI aligns with human values and ethics.

The concept of digital superintelligence as a transformative or potentially destructive force.

Transcripts

00:00

AI will destroy humanity. We had nuclear bombs, which could potentially destroy civilization; obviously we have AI, which could destroy civilization; we have global warming, which could destroy civilization, or at least severely disrupt civilization. Digital intelligence will exceed biological intelligence by a substantial margin. It's obvious. We're not paying attention: we worry more about what name somebody called someone else than whether AI will destroy humanity. That's insane, like children in a playground. Humanity really is not evolved to think of existential threats in general; we're evolved to think about things that are very close to us, near term, to be upset with other humans, and not really to think about things that could destroy humanity as a whole.

00:47

Excuse me, how could AI destroy civilization? It would be something in the same way that humans destroyed the habitat of primates. I mean, it wouldn't necessarily be destroyed, but we might be relegated to a small corner of the world. When Homo sapiens became much smarter than other primates, it pushed all the other ones into small habitats. They were just in the way.

01:11

Could an AI, even in this moment, just with the technology that we have before us, be used in some fairly destructive ways? You can make a swarm of assassin drones for very little money, by just taking the face ID chip that's used in cell phones, adding a small explosive charge and a standard drone, and having them do a grid sweep of the building until they find the person they're looking for, ram into them, and explode. You can do that right now; no new technologies needed.

01:45

Right now, probably a bigger risk than being hunted down by a drone is that AI would be used to make incredibly effective propaganda that would not seem like propaganda. So these are deepfakes? Yeah: influence the direction of society, influence elections. Artificial intelligence just hones the message, hones the message, checks the feed, looks at the feedback, makes the message slightly better, within milliseconds. It can adapt its message and shift and react to news. And there are so many social media accounts out there that are not people. Like, how do you know it's a person and not a non-person?

02:27

One reason that regulators and others are a little bit in denial about this is the speed, the pace of change. What is the consequence of that speed of change? The way in which a regulation is put in place is slow and linear, and we are facing an exponential threat. If you have a linear response to an exponential threat, it's quite likely the exponential threat will win.

02:52

Well, I think we really need to stop this principle that the nominally weaker party is always right. This is simply not true. We have to get rid of the rule that if you're weaker, you're automatically good. That obviously makes no sense.

03:12

The crowd falls into an eerie silence as Elon Musk delivers a stark warning about the current state of the world, capturing everyone's attention.

03:18

We have many things today that we did not have in the past; we are far more prosperous. All of humanity is far more prosperous today than it was in the past. Yeah, I think generally people should always be wary that they may have, either consciously or mostly subconsciously, internalized the notion of a zero-sum game or a fixed pie. If you internalize that everything's zero-sum, meaning that in order for me to get ahead someone else has to not get ahead, or for me to have stuff someone else must not have stuff...

04:02

Elon Musk shocks the audience by highlighting the pervasive violence against the innocent that is unfolding in the world, prompting reflection and concern.

04:09

I think maybe the most shocking thing was to see the delight in [harming] innocent people, the delight in kids and defenseless women and men. And there was no remorse; quite the opposite.

04:30

In a call to action, Elon emphasizes the urgent need to fix the education system for the sake of our children's future, a plea that resonates profoundly.

04:40

That fundamentally has to be addressed or there will not be peace: the education of kids, and the indoctrination of hate into kids, has to stop. If you have that axiomatic flaw, then what needs to be done is to fix that axiomatic flaw, because it is false. It's not a zero-sum game; we can absolutely grow, and have grown, and the evidence is overwhelming: we have grown the output of goods and services. I mean, that requires a level of indoctrination that is extremely intense. So I think, to solve that, you have to address the source of the indoctrination, because no one should ever be glad about some child [dying].

05:43

You know, when I was in [inaudible], that was my top recommendation: you've got to make sure... you know, I understand the need for this, to invade, and unfortunately [inaudible] people will [inaudible]; there's no way around it.

05:57

Elon delves into the complexities of morality, asserting that there's both good and bad in absolute terms, challenging the audience to consider absolute moral standards.

06:09

If you are, in quotes, oppressed, or the weaker party, it doesn't mean you're right, because if some of those weaker groups want to annihilate you, that does not make them good. You know, it often makes sense that you don't want to beat up someone smaller and weaker than you, but if that smaller group wants to [kill] you, then they're bad. Okay? I mean, I'm a big believer in moral absolutism, not moral relativism. There is good and bad in the absolute, and you judge any group or individual against absolute moral standards, not whether they're the so-called oppressed or oppressor; just on absolute moral terms: are they doing good things? Do they want to [kill] some people? That's bad. It doesn't matter who they are.

07:10

Elon characterizes the present era as the most interesting of times, inviting the crowd to ponder the profound shifts and challenges facing society.

07:20

I mean, it wasn't that long ago where we would count a good year as one where, well, the bubonic plague wasn't that bad, only 10 percent; you know, not that many people starved through the winter; we only lost 5% of our population due to raids from other tribes. Basically, life used to be very rough in the old days, and if they could see us now, they'd be like: what are you guys complaining about? This is amazing. You know, not having to worry about food: we were food-constrained for probably the last 100,000 years, until recently. So really, the present day is amazing compared to the past, and anyone who doesn't think it's amazing is not a good student of history. So I think we live in the most interesting of times, and probably the best of times.

08:28

Musk critiques legacy media, pointing out their penchant for attempting to cancel certain platforms, raising questions about the freedom of expression.

08:35

Well, I mean, the reality is that X is competition for the legacy media. X is where people go to get the most current news and learn about the world, so the legacy media is our direct competitor, and they're really going to try every angle to cancel X. I mean, if you want to know why things are happening, look at the incentives. And legacy media had a tough time with respect to usage: the numbers I saw were that traditional print and cable television viewership went down something like 20 to 30 percent last year, while X went up roughly that same 20 to 30 percent. So it's direct competition for people's attention. So if there's some attack they can levy against me, they will.

09:35

The visionary entrepreneur advocates for a return to merit-based evaluation, urging society to judge individuals based on their competence, not on factors like gender or belief.

09:48

I think we need to return to where things were, or mostly were, which is a focus on merit. It doesn't matter whether you're a man or a woman, what race you are, what beliefs you have; what matters is how good you are at your job, what your skills are. You could be a three-legged green Martian who wears a kimono and drinks yak milk; who cares? It doesn't matter. What matters is how good your work is; that's it. That's the least racist you can be: just care about the work that somebody does and not anything else. That's what the focus needs to return to.

10:47

It really has come completely full circle, or 180°, from what has historically been the case. Through most of history, the operating principle has been "might makes right." Really, up until modern times, might makes right was the rule: if you were stronger, you were right. Now we've sort of flipped it to: if you're weaker, you're right. But neither is true. There is rightness independent of strength or weakness. Just because somebody's strong doesn't mean they're right, and just because somebody's weak doesn't mean they're right. You have to look at morals in the absolute.

11:37

Musk highlights the need to counteract indoctrination that negatively influences children, underlining the importance of fostering critical thinking and independent perspectives.

11:48

But the most important thing is to ensure that, afterwards, the indoctrination, where kids are taught from as soon as they can understand language that their goal is to [kill], stops. If you're told that from when you're a toddler, well, you're going to believe it, and that needs to stop. I think it is actually human nature to love humanity unless you are indoctrinated otherwise. I think the actual default for most people is to love humanity, and to love being around their fellow humans. Take, for example, one of the worst punishments in prison: solitary confinement. All solitary confinement means is that you don't get to hang out with the other prisoners (which might not be the best group of people to hang out with), but even that is considered a terrible punishment, to not be able to hang out with other prisoners. So in truth, I think it is in our nature to love humanity unless we are indoctrinated otherwise, and so we have to stop that indoctrination.

play13:02

indoctrination Elon Musk encourages

play13:05

seizing the unprecedented access to

play13:07

Global Knowledge emphasizing the

play13:09

transformative power of information

play13:10

available at our

play13:12

fingertips well um I do I do put post a

play13:16

lot on the

play13:17

xplatform um you know sometimes 100

play13:21

times a day so in once in a while I'll

play13:24

do something dumb um for sure

play13:27

um but I I I really um you know I I try

play13:32

to say things that I think are

play13:34

interesting or funny um I mean there

play13:38

must be some reason why 169 million

play13:41

people follow

play13:42

me I guess I don't know um I must be

play13:46

keeping them amused in some way um so

play13:51

amuse Entertain You

play13:53

know have opinions on something

play13:56

sometimes they're wrong sometimes

play13:57

they're right um

play14:00

and um you know for things like

play14:03

Community notes it applies to me as well

play14:05

as it applies to anyone else so if I say

play14:07

something that's incorrect or you know

play14:10

not full context then Community notes

play14:12

will correct me very quickly

play14:15

so

play14:17

um but it's only me doing these posts

play14:20

ever I don't have a team or anything uh

play14:23

so uh in fact I generally would

play14:26

recommend for leaders of the world to

play14:29

just literally post your own

play14:32

stuff and once in a while you make a

play14:34

mistake don't worry about it in a

play14:36

thought provoking moment musk suggests

play14:38

that societal focus should balance

play14:40

addressing inequality with recognizing

play14:42

areas of Common Ground challenging

play14:44

prevailing perspectives this and there's

play14:48

many wonderful interesting things that

play14:50

are happening besides space exploration

play14:53

obviously as time goes by we improve our

play14:55

ability to cure cancer to cure many

play14:57

diseases um there's increased access to

play15:01

information and people talk a lot about

play15:03

inequality but what about the equality

play15:05

of access to information that's

play15:08

incredible um you know right now if you

play15:11

if you've got

play15:13

uh you know a very cheap electronic

play15:15

device at an internet internet cafe you

play15:18

can access all of the lectures of MIT

play15:21

for

play15:22

free uh you can access almost any book

play15:25

you can learn

play15:27

anything uh this is is an equality of

play15:30

access to information that was

play15:33

Unthinkable uh even 20 30 years

play15:36

ago um you can teach yourself how to do

play15:39

anything for

play15:41

free that's

play15:43

amazing

play15:46

um maybe there's like too much focus on

play15:49

the things that are unequal but we

play15:51

should we forget about the things that

play15:53

are

play15:54

equal and that have have improved

play15:57

inequality so much

play15:59

like access to

play16:01

information um you know that's one of

play16:03

the things that we're trying to help out

play16:05

with stall link is uh provide access

play16:08

inter internet access to people who

play16:10

don't have internet access or where it's

play16:12

too expensive for them to afford because

play16:15

once you have internet access you can

play16:17

learn anything and you can sell your

play16:20

your your products and

play16:21

services so

play16:25

um I think that's that's pretty amazing

play16:28

I mean you know that's sort of like if

play16:31

if we're going to count our flaws we

play16:33

should also count our

play16:37

blessings I think I think there are some

play16:39

things that we can agree on or most

play16:42

people would agree on are cool and

play16:45

inspiring like um Humanity going to the

play16:49

Moon you know if you ask probably kids

play16:53

almost anywhere in the world what's the

play16:55

coolest thing humans have ever done

play16:59

I think a lot of kids would say we went

play17:02

to the

play17:03

moon you know um and uh I so I think we

play17:09

want to continue that SP of

play17:11

exploration um you know speaking of kind

play17:13

of growing the pie and is is that we we

play17:17

want to I think have a dream that we can

play17:21

be uh a space bearing civilization a

play17:25

multi-planet species a multi-cell

play17:27

species and go out there among the stars

play17:31

and and discover the nature of the

play17:34

universe um that we can collectively

play17:38

seek greater

play17:40

Enlightenment um to better understand

play17:43

this Incredible Universe we live

play17:47

in

play17:48

um I find that very compelling I I think

play17:52

I think most people would find that very

play17:55

compelling you know that I've had some

play17:57

sort of just disturbing conversations

play18:00

with sort of some say nephews uh or some

play18:05

some family members not not my kids but

play18:09

um kids of family members

play18:11

where uh I I was actually shocked to see

play18:14

anti-Semitism or or at

play18:17

least yeah um one disturbing

play18:21

conversation was you know saying that

play18:24

the

play18:25

uh you know that we deserve to have the

play18:28

Trade Towers because of our terrible

play18:30

foreign policy I was like this is what

play18:33

they're teaching you in Elite New York

play18:35

high schools this is messed up well I

18:37

Well, I mean, one way that AI could go wrong is if the extinctionist philosophy is programmed into the AI, whether implicitly or explicitly. We're going to go in depth into artificial intelligence, which is potentially the biggest civilizational threat, and we are currently circling the event horizon of the black hole that is digital superintelligence. The event horizon... I mean, probably not explicitly, but there's a strong danger of an implicit extinctionist philosophy being programmed into AI.

19:10

Elon Musk contemplates the swift evolution of AI, highlighting its pace compared to traditional annual progress.

19:17

The rate at which AI is growing really boggles the mind. It currently seems as though the amount of compute dedicated to artificial intelligence is increasing by a factor of 10 roughly every six months. It's faster than annual, that's for sure. I just recently heard, today, about a gigawatt-class AI compute cluster.

19:44

The paradox arises as AI suppresses hateful content while simultaneously raising concerns about the erosion of free speech.

19:51

And this is despite showing repeated analyses of the system, including third-party analysis of the system, which actually showed that the number of views of hateful content declined. So the third parties who have all the data analyzed it and said there's actually less hate speech.

20:14

Digital superintelligence, akin to a black hole, emerges as an unpredictable force, labeled the singularity by Musk.

20:24

You know, we'll have the sort of AGI singularity. Digital superintelligence is sometimes called a singularity, like a black hole, because just as with a black hole, it's difficult to predict what happens after you pass the event horizon. It's really staggering, and, just trying to give a sense of scale, I've never seen anything move this fast, of any technology. This is the fastest-moving thing. In terms of aiming for AI safety, my best guess, from my sort of primitive biological neural net, is that we should aim for maximum truth-seeking and curiosity. That's my gut feel for how to make AI as safe as possible.

21:09

Musk's apprehensions intensify as AI development accelerates at an unprecedented rate, emphasizing the urgency of safety measures.

21:14

The issue, I think, is not a question of hate speech; it's not a question of antisemitism, obviously. It's that the ADL and a lot of other organizations have become activist organizations, which are acting far beyond their stated mandate, or their original mandate, and I think far beyond what donors to those organizations think they are doing.

21:41

Activism intertwines with AI discussions, with organizations like the ADL taking on roles that extend beyond their original mandates.

21:46

Neuralink necessarily moves slower than AI, because whenever you put a device in a human, you have to be incredibly careful. So it's not clear to me that Neuralink will be ready before AGI; I think AGI is probably going to happen first.

22:01

Neuralink progress, while notable, trails behind the rapid advancement of artificial general intelligence, posing challenges.

22:09

So this is a staggering amount of compute, and there are many such things; that's just the biggest one I've heard of so far, but there's a 500-megawatt installation happening, and there are multiple 100-megawatt installations in the works. It's not even clear to me what you do with that much compute, because when you actually add up all human data ever created, you really just run out of things to train on quite quickly. Like, if you've got maybe, I don't know, 20 or 30,000 H100s... (You can train on synthetic data, almost?) Yeah, yeah, basically you have to have synthetic data, because certainly well under 100,000 H100s you can train on all human data ever created, including video.

22:56

A colossal 500-megawatt installation unfolds as a mammoth facility harboring vast reserves of synthetic data.

23:03

So I've actually met with a number of world leaders to talk about AI risk, because I think for a lot of people, unless you're really immersed in the technology, you don't know just how significant the risk can be. I think the reward is also very positive, so I don't want to be... you know, I tend to view the future as a series of probabilities. There's a certain probability that something will go wrong, some probability it'll go right; it's kind of a spectrum of things. And to the degree that there is free will versus determinism, we want to try to exercise that free will to ensure a great future. And the single biggest rebuttal that I've gotten among leaders in the West with regard to AI is: well, sure, the West might regulate AI, but what about China? Because, to your point about which countries will have significant leadership in AI, China is certainly one of the very top, potentially number one.

24:12

Elon Musk takes on the role of a harbinger, cautioning global leaders about the perilous trajectory of unchecked AI development.

24:19

So you've got your limbic system, your sort of basic drives; your cortex, which is the thinking and planning; and then you have a tertiary layer, which is your computers, your devices, your phones, laptops, all the servers that exist, the applications. And in fact, I think probably a lot of people have found that if you leave your cell phone behind, if you forget your cell phone, it's like missing-limb syndrome; you've probably read that losing your cell phone is like a missing limb. Because your cell phone is an extension of yourself. The limitation is bandwidth: the rate at which you can input (or, I should say, output) information into your phone or computer is very slow. With a phone, it's really just the speed of your thumb movements, and, best-case scenario, you're a speed typist on a keyboard, but even that data rate is very slow. We're talking about tens, maybe hundreds, of bits per second, whereas a computer can communicate in trillions of bits per second. So, and this is admittedly somewhat of a Hail Mary or a long shot, if you can improve the bandwidth between your cortex and your digital tertiary self, then you can achieve better cohesion between what humans want and what AI does. At least that's one theory; I'm not saying this is a sure thing, it's just one potential iron in the fire. If ultimately hundreds of millions or billions of people get a high-bandwidth interface to their digital tertiary self (their AI self, effectively), then that probably leads to a better future for humanity.

25:55

Musk envisions a future where AI handles mundane tasks: a symbiotic relationship that uplifts humanity.

26:03

The danger with programming morality, with an explicit morality program, is what is sometimes referred to as the Waluigi problem: if you create Luigi, you automatically create Waluigi by inverting Luigi. So I think we have to be careful about programming in an arbitrary morality. But if we focus on maximizing truth with acknowledged error, that's probably the way to maximize safety, and also to have the AI be curious, because I think Earth is much more interesting to an advanced AI with humans on it than without humans.

26:38

The Waluigi problem looms, urging a delicate balance in programming morality to guide AI without compromising human values.

26:46

We're at a very interesting juncture in the world from a technology standpoint. There are so many things happening. If you were to plot the various types of technology on a chart (the modern era, I'd say even just the last 20 years, certainly the last 100 years), from the dawn of human civilization the growth of technology just looks like a wall. Technology is improving at sort of a hyperexponential rate, and we obviously want to make sure that the technology is something that benefits humanity to the greatest extent possible.

27:25

You know, and what would that look like? Well, there's this guy who was on the front page of the New York Times about a year ago: he's head of the extinctionist society, and he was literally quoted as saying there are 8 billion people on Earth and it would be better if there were none. Oh my God. And if you take the extreme environmentalist argument, especially the implicit extreme environmentalist argument, there's an implicit conclusion that humans are a plague on the surface of the Earth. So I think we have to be quite careful about that: if the extinctionist movement were somehow programmed into AI as the optimization, that would obviously be extremely dangerous.

28:09

So I'm trying not to be a scaremonger or something, but when you're talking about having something that is an intelligence far in excess of the smartest human on Earth, you have to ask, at that point: who's in charge, the computers or the humans? And there are some interesting ratios that I think are quite profound, one of them being the ratio of digital to biological compute. You take all the human brains, and then all the computer circuits, and you ask what that ratio is. The ratio of digital to biological compute is increasing dramatically every year, because the population of Earth is fairly static but the output of silicon is dramatically increasing. So basically, at a certain point, the percentage of compute that is biological will be very small. And anyway, some of these technologies... I'm a technologist, and I've got some responsibility for the creation of artificial intelligence, at least a little bit, and I think we just want to make sure that we're guiding things to a positive future and reducing the probability of a negative one.
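
A toy illustration of the digital-to-biological compute ratio Musk describes (a sketch only: every constant below is a placeholder assumption chosen to show the shape of the trend, apart from the roughly static population and the 10x-per-six-months growth rate taken from the talk):

    # Toy model: biological compute (population x an assumed per-brain figure)
    # stays flat, while digital compute multiplies every year, so the ratio
    # is eventually dominated by silicon.
    POPULATION = 8e9         # people; roughly static, per the transcript
    BRAIN_COMPUTE = 1e16     # ops/sec per brain -- placeholder assumption
    DIGITAL_START = 1e21     # total silicon ops/sec in year 0 -- placeholder
    GROWTH = 100             # 10x every 6 months => 100x per year (from the talk)

    biological = POPULATION * BRAIN_COMPUTE
    digital = DIGITAL_START
    for year in range(5):
        print(f"year {year}: digital/biological = {digital / biological:.3g}")
        digital *= GROWTH

Whatever starting values are chosen, a flat denominator divided into a geometrically growing numerator makes the biological share of total compute shrink toward zero, which is the point of the quote.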

29:21

We definitely live in the most interesting times. Actually, for a while I was kind of depressed about AI, but then I kind of got fatalistic about it and said: well, even if AI was going to end all humanity, would I prefer to be around to see it or not? I guess I would prefer to be around to see it, just out of curiosity. Obviously I hope AI is extremely beneficial to humanity, but the thing that reconciled me to be less anxious about it was to say: well, I guess even if it was apocalyptic, I'd still be curious to see it.

29:56

I mean, it's sort of a funny thing. If you assume a best-case AI scenario, imagine you're the AI and you just want the human to tell you what it wants (just, please, spit it out), but it's speaking so slowly. Like a tree. Trees communicate: if you watch a sped-up version of a tree growing, it's actually communicating with the soil, trying to find the sunlight, reacting to other trees, just very slowly. But from a tree's standpoint, it's not that slow. So what I'm saying is: we don't want to be a tree. That's the idea behind a high-bandwidth neural interface: even when the AI desperately wants to do good things for us, we can actually communicate several orders of magnitude faster than we currently can.

30:48

Digital superintelligence: that might be the most significant technology that humanity ever creates, and it has the potential to be more dangerous than weapons. You know, in the case of founding OpenAI, it was to have there not be a unipolar world where Google, with its subsidiary DeepMind, would control an overwhelming amount of AI talent and compute and resources, which then is somewhat dependent on basically how Larry Page and Sergey Brin believe things should go, because between the three of them, or two out of three, they have control over Alphabet, since they've got super voting rights. And I was quite concerned, based on some conversations I had with Larry Page, where, you know, he did call me a speciesist for being pro-humanity. And I'm like: what side are you on?

32:00

I think generally it would be a good idea to have some kind of AI regulatory agency. You start off with a team that gathers insight, to get maximum understanding; then you have some proposed rulemaking; and then eventually you have regulations that are put in place. This is something we have for everything that is a potential danger to the public: for food, the Food and Drug Administration; we've got aircraft with the FAA, and rockets. Anything that is a danger to the public, over time we have learned, often the hard way, after many people [died], to have a regulatory agency to protect public safety. I'm not someone who thinks that regulation is a panacea, where it's only good; of course there are some downsides to regulation: things move a bit slower, and sometimes you get regulatory capture, that kind of thing. But on balance, I think the public would not want to get rid of most regulatory agencies. And you can think of the regulatory agency as being like a referee. What sports game doesn't have a referee? You need someone to make sure that people are playing fairly, not breaking the rules, and that's why basically every sport has a referee of one kind or another. So that's the rationale for AI safety, and I've been pushing this all around the world. When I was in China a few months ago, meeting with some of the senior leadership, my primary topic was AI safety and regulation, and after we had a long discussion, they agreed that there's merit to AI regulation and immediately took action in this regard. So sometimes we'll get this comment of: well, if the West does AI regulation, surely what if China doesn't, and then leaps ahead? I think they're also taking it very seriously, because, you know, the opposite of whatever moral constraints you programmed into the system...

play33:54

know circling the Event Horizon of the

play33:56

black hole that is digital super

play33:58

intelligence The Event Horizon I mean

play34:00

probably not explicitly but there's a

play34:02

strong danger of of an implicit

play34:04

extinctionist philosophy being

play34:06

programmed into AI as WE peer into the

play34:08

future of AI the pace of its advancement

play34:11

leaves us Spellbound surpassing all

play34:13

expectations and defying the bounds of

play34:15

human

play34:16

imagination well I mean one way that AI

play34:19

could go wrong is if the extinctionist

play34:21

philosophy is programmed into the AI whe

play34:25

whether implicitly or

play34:27

we're going to go in depth into

play34:29

artificial intelligence which is

play34:31

potentially the biggest civilizational

play34:35

threat the integration of AI into social

play34:38

media platforms has ushered in an era of

play34:40

censorship silencing voices deemed

play34:43

unsafe and eroding the foundations of

play34:45

free speech the the the rate of which AI

play34:48

is growing is it really boggles the mind

play34:50

um it currently seems as though the

play34:53

amount of compute dedicated to

play34:55

artificial intelligence is um increasing

play34:58

by a factor of 10 roughly every 6 months

play35:01

um it's it's faster than annual that's

play35:03

for sure so I recently heard today about

play35:06

a gwatt class

play35:09

AI uh compute

play35:14

cluster and this is despite you know

play35:16

showing repeated uh analyses of the

play35:19

system including third party analysis of

play35:22

the system which actually showed that uh

play35:25

the number of uh views of painful

play35:27

content uh

play35:29

declined so you know the third parties

play35:34

have all the data analy and that

play35:36

actually does less save speech

play35:38

contemplating the trajectory of digital

play35:40

super intelligence feels akin to staring

play35:42

into a vast Abyss where the unknown

play35:45

looms large and the consequences remain

play35:47

shrouded in

play35:49

uncertainty you know we'll have the sort

play35:51

of AGI Singularity you know sometimes

play35:54

digital super intelligence is called

play35:55

like a singularity like black hole

play35:57

because just like with a black hole it's

play35:59

difficult to predict what happens after

play36:00

you pass the event rizm of black

play36:05

hole it's it's really staggering and and

play36:07

for sure so I'm just trying to give a

play36:09

sense of scale it's I've never seen

play36:10

anything move this fast any of any

play36:13

technology this is the fastest moving

play36:15

thing in terms of aiming for AI safety

play36:18

my my best guess of my sort of primitive

play36:20

biological neural is is that we should

play36:23

aim for maximum truth seeking and and

play36:27

curiosity that that's that's that's my

play36:30

gutfield for this for how to make AI as

play36:32

safe as possible amidst this Whirlwind

play36:35

of technological progress the Quest for

play36:37

AI safety becomes Paramount calling for

play36:40

an unwavering commitment to truth

play36:42

seeking and curiosity driven

play36:44

exploration the issue I think with the

play36:47

is not a question of hate speech it's

play36:49

not a question obviously uh it's that

play36:52

the ad and a lot of other organizations

play36:55

have become activist

play36:57

organizations um which are acting far

play37:00

beyond their uh stated mandate or their

play37:03

original mandate and and I think far

play37:05

beyond what donors to those

play37:07

organizations think they are

play37:12

doing New link is is necessarily moved

play37:14

slower than AI because when whenever you

play37:17

put a device in a human you have to be

play37:18

incredibly careful so I I think it's not

play37:21

clear to me that the neur link will be

play37:22

ready before AGI I think AGI is probably

play37:25

going to happen first

play37:27

organizations like the ADL have morphed

play37:30

into activist entities straying far from

play37:32

their intended purpose and wielding

play37:34

influence beyond their mandate we're at

play37:36

a very interesting juncture in the world

play37:39

from a technology standpoint if you say

play37:43

there's so many things happening if you

play37:44

were to plot the the various types of

play37:45

Technology on a chart you know the

play37:48

modern era and I'd say even just like

play37:49

really the last 20 years certainly the

play37:52

last 100 years from the drawn of human

play37:54

civilization the growth of Technology

play37:56

just looks like a wall it's a technolog

play37:59

is improving at sort of a

play38:01

hyperexponential rate and we obviously

play38:03

want to make sure that the technology is

play38:07

something that benefits humanity and to

play38:10

the greatest extent

play38:13

possible you know and and what would

play38:15

that look like what would that look like

play38:17

well like there's this guy on the front

play38:18

page of New York Times um about a year

play38:20

ago um he's head up the extinctionist

play38:22

society and he was literally quoted as

play38:24

there are 8 billion people on on Earth

play38:26

it would be better if there were none um

play38:28

oh my God and yeah um so and if if you

play38:33

take the extreme environmentalist

play38:35

argument especially like the implicit

play38:37

extreme environmentalist argument they

play38:39

they there's an imp implicit conclusion

play38:42

that humans are a plague on the surface

play38:44

of the Earth so we I think we have to be

play38:46

quite careful about um an an implicit

play38:49

like like if the extinctionist movement

play38:52

was somehow programmed into AI as as the

play38:55

optimization that be extremely dangerous

play38:58

while AI hurdles forward at break neck

play39:00

speed Endeavors like neuralink proceed

play39:02

with caution mindful of the complexities

play39:05

and ethical considerations inherent in

play39:07

merging technology with the human body

play39:09

so I try not to be sort of a whatever a

play39:12

scare Monger or something but when

play39:14

you're talking about having something

play39:16

that is an intelligence far in excess of

play39:18

the smartest human on earth you have to

play39:20

say at that point Who's in charge is it

play39:23

the computers or the humans and you know

play39:27

there there's some interesting ratios

play39:28

that I think are are quite profound like

play39:31

one of them being the ratio of digital

play39:33

to biological compute so you take Al the

play39:36

all the human brains and all the the

play39:38

computer circuits and you say what's

play39:40

that ratio the ratio of digital to

play39:43

biological computer is increasing

play39:45

dramatically every year because the

play39:46

population of Earth is fairly static but

play39:48

the output of silicon is dramatically

play39:50

increased so basically at a certain

play39:52

point the percentage of compute that

play39:54

will be biological is very small

play39:57

and anyway some of these Technologies

play39:59

like and I'm a technologist and I be

play40:01

some responsibility for the creation of

play40:05

artificial intelligence at least you

play40:06

know a little bit and I think we just

play40:09

want to make sure that we're guiding

play40:10

things to a

play40:12

technological you know a positive future

play40:15

and and reduce the probability of a

play40:17

negative one the exponential growth of

play40:19

Technology demands our vigilance

play40:21

ensuring that its benefits align with

play40:23

the greater good of humanity we

play40:25

definitely live in the most interesting

play40:26

times and actually for a while I was

play40:28

kind of depressed about AI but then I I

play40:30

kind of got fatalistic about it and said

play40:32

like well even if even if AI was going

play40:34

to you know end all all Humanity would I

play40:36

prefer to be around to see it or not I I

play40:39

guess I would prefer to be around to see

play40:40

it just out of curiosity but I obviously

play40:44

hopefully AI is extremely beneficial to

play40:46

humanity but but the thing that sort of

play40:48

reconciled me to be less anxious about

play40:50

it was to say well I guess even if it

play40:53

was apocalyptic I'd still be curious to

play40:55

see the it's like you know I be curious

play40:58

to see

play41:00

it I mean it's it's sort of a funny

play41:02

thing like if you assume like a best

play41:04

case a scenario imagine if if if you're

play41:06

the AI and you're trying to you you just

play41:09

want the human to tell you what it wants

play41:11

just please spit it out but it's

play41:13

speaking so slowly like a tree okay like

play41:17

trees communicate okay they if you watch

play41:19

a tree like a you know sped up version

play41:22

of a tree growing it's actually

play41:24

communicating it's communicating with

play41:26

soil it's trying to find the sunlight

play41:28

you know it's reacting to other trees

play41:30

and that kind of thing very slowly but

play41:33

from a tree standpoint it's you know not

play41:35

that slow so so what I'm saying is we

play41:37

don't want to be a tree that's that's

play41:39

the idea behind a high band with neural

play41:42

interface is just in even when the AI

play41:44

desperately wants to do good good things

play41:46

for us that we can actually communicate

play41:48

several orders of magude faster than we

play41:49

currently

play41:50

[Music]

play41:52

Canal super

play41:54

intelligence I might be the most most

play41:56

significant technology that Humanity

play41:57

ever creates um and it has the potential

play42:01

to be more dangerous than um weapons

play42:06

so

play42:10

um you know the case of open the ey

play42:13

there was to have they not be a unipolar

play42:16

world where um Google with its subsidary

play42:19

deep mind uh you know would control an

play42:23

overwhelming amount of AI talent and

play42:26

hudes and and resources um which then is

play42:29

somewhat dependent on basically how how

play42:33

Larry Pig uh and

play42:35

Sergey um and er believe things should

play42:39

go they they between three of them or

play42:41

two out of three have control over

play42:44

alphabet CU they've got super voting

play42:46

rights and um you know I was quite

play42:49

conserv to some conversations I had with

play42:51

L paig uh where um you know you did call

play42:55

me a species

play42:56

being pro

play42:57

humanity and

play43:00

um so I'm like what side are you on L we

play43:03

tread a precarious path wary of the

play43:05

potential for AI to embody existential

play43:08

threats such as the extinction movement

play43:10

underscoring the need for conscientious

play43:12

oversight and regulation I think

play43:14

generally it would be a good idea to

play43:16

have some kind of AI regulatory agency

play43:19

and you start off with uh a team that

play43:22

gathers insight to uh get maximum

play43:25

understanding you have some proposed rul

play43:27

making and then eventually you have

play43:30

regulations that are put in place and

play43:31

this is something we have for everything

play43:33

that is potential danger to the public

play43:34

so if it is you know we Administration

play43:37

we got aircraft with the FAA and Rockets

play43:40

you know there every anything that is a

play43:42

danger to the public over time we have

play43:44

learned, often the hard way, after many

play43:47

people died, to have a regulatory agency to

play43:50

protect public safety. I'm not someone

play43:53

who thinks that regulation is a panacea,

play43:55

where it's only good. Of course there are

play43:56

some downsides to regulation, in that

play43:59

things move a bit slower and sometimes

play44:01

you get regulatory capture and that kind

play44:03

of thing, but on balance I think the

play44:06

public would not want to get rid of most

play44:09

regulatory agencies. And you can think of

play44:11

it also as like the regulatory agency

play44:14

being like a referee. You know, what

play44:16

sports game doesn't have a referee you

play44:18

need someone to make sure that

play44:20

people are playing fairly, not

play44:22

breaking the rules, and that's why

play44:24

basically every sport has a referee of one

play44:26

kind or another so that's the rationale

play44:29

for AI safety and I've been pushing this

play44:31

all around the world. And when I was

play44:33

in China a few months ago meeting with

play44:35

some of the senior leadership, my

play44:36

primary topic was uh AI safety and

play44:39

regulation. And after we had

play44:42

a long discussion, they agreed that there's

play44:44

merit to AI regulation and immediately

play44:47

took action in this regard. So sometimes

play44:51

we'll get this comment of like well if

play44:53

the West does AI regulation, surely then

play44:56

what if China doesn't and

play44:57

then leaps ahead? And I think

play44:59

they're also taking it very seriously

play45:01

because you know the opposite of

play45:03

whatever moral constraints you

play45:04

programmed into the

play45:07

system. So this is a staggering amount of

play45:09

compute, um, and there are many

play45:12

such things; that's just the biggest

play45:13

one I've heard of so far but there are

play45:15

there's a 500 megawatt installation

play45:17

happening there, and there's

play45:19

multiple 100-megawatt

play45:21

installations, um, in the works. It's

play45:23

not even clear to me what you do

play45:25

with that much

play45:26

compute, um, 'cause when you

play45:30

actually add up all human data ever

play45:32

created you really just run out of

play45:34

things to train on quite quickly, um,

play45:38

like, you know, if you've got maybe, I

play45:40

don't know, 20 or 30,000 H100s, you can

play45:43

train on synthetic data almost — yeah, yeah,

play45:46

you basically have to have

play45:47

synthetic data, because... well,

play45:50

under 100,000 H100s you can train on all

play45:52

human data ever created, including video.

play45:54
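A rough sanity check of that claim (my own back-of-envelope sketch, not figures from the talk — every constant below is an assumption chosen for illustration):

```python
# Back-of-envelope sketch: how quickly a 100,000-H100 cluster could chew
# through a corpus on the order of "all human text ever created".
# All constants are assumptions for illustration, not measured values.

H100_FLOPS = 1e15          # assumed ~1 PFLOP/s per H100 at low precision
UTILIZATION = 0.4          # assumed realistic training efficiency (40%)
NUM_GPUS = 100_000         # the cluster size mentioned in the transcript

TOKENS = 2e13              # assumed ~20 trillion tokens of usable human text
PARAMS = 1e12              # assumed 1-trillion-parameter model

# Standard approximation: training cost ~ 6 * parameters * tokens FLOPs
train_flops = 6 * PARAMS * TOKENS

cluster_flops_per_sec = H100_FLOPS * UTILIZATION * NUM_GPUS
seconds = train_flops / cluster_flops_per_sec
print(f"~{seconds / 86_400:.0f} days to train through the whole corpus")
# With these assumptions: ~35 days. Data, not compute, becomes the
# bottleneck -- which is why synthetic data comes up in the conversation.
```

Under these made-up but plausible numbers, the text runs out long before the compute does, which is the point being made about synthetic data.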

As AI outpaces human advancement, the

play45:57

growing chasm between technological

play45:59

progress and societal evolution raises

play46:01

concerns about the balance of power in

play46:04

an AI-dominated

play46:05

world. So I've actually met with a number

play46:09

of world leaders to talk about AI

play46:14

risk because I think for a lot of people

play46:17

— unless you're really immersed in

play46:19

the technology, you don't know just

play46:22

how significant the risk can be. I think

play46:24

the reward is also very positive so I

play46:26

don't want to be, you know... I

play46:28

tend to view the future as a series of

play46:31

probabilities. There's a certain

play46:33

probability that something will go you

play46:35

know wrong some probability it'll go

play46:37

right. It's kind of a spectrum of

play46:39

things and to the degree that there is

play46:40

free will versus determinism, then we

play46:43

want to try to exercise that free will

play46:46

to ensure a great future. So, you know,

play46:51

the single biggest rebuttal that

play46:53

I've gotten among leaders in the

play46:55

West with regard to AI is that, well, sure,

play47:00

the West might regulate AI but what

play47:02

about China? Because, to your point about

play47:05

which countries will have significant

play47:07

leadership in AI, China is certainly one

play47:09

of them one of the very top you know

play47:12

potentially number

play47:14

one. So you've got your limbic system,

play47:17

your sort of basic drives; your cortex,

play47:20

which is the thinking and planning and

play47:21

then you have a tertiary layer, which is

play47:23

your computers, your devices, your phones,

play47:25

laptops all the servers that exist the

play47:27

applications and in fact I think

play47:29

probably a lot of people have found that

play47:32

if you leave your cell phone behind, you feel

play47:35

panicky. Yeah, if you forget your cell

play47:36

phone it's like missing limb syndrome

play47:38

you know you like where'd that thing go

play47:41

Losing your cell phone is like a missing

play47:42

limb. Because your cell phone is

play47:45

an extension of yourself. The limitation

play47:47

is bandwidth. The rate at which

play47:50

you can input or I should say output

play47:52

information into your phone or computer

play47:54

is very slow so with a phone it's really

play47:56

just the speed of your thumb

play47:57

movements, and, you know, best-case

play48:00

scenario you're a speed typist on a

play48:02

keyboard uh but even that data rate is

play48:04

very slow. We're talking about tens, maybe

play48:07

hundreds of bits per second whereas a

play48:10

computer can communicate in trillions of

play48:12

bits per second.
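To make that gap concrete, here is a small illustrative calculation (my sketch; the typing-speed and link-speed figures are assumptions, not measurements from the talk):

```python
# Illustrative sketch of the bandwidth gap between a human typing and a
# machine-to-machine link. All figures are assumptions for illustration.
import math

WPM = 80                        # assumed fast typist, words per minute
BITS_PER_WORD = 5 * 8           # ~5 characters per word at 8 bits each (rough)
human_bps = WPM * BITS_PER_WORD / 60    # bits per second out of a human

machine_bps = 1e12              # a terabit-class link, matching the talk's
                                # "trillions of bits per second"

gap = machine_bps / human_bps
print(f"human output: ~{human_bps:.0f} bits/s")
print(f"gap: ~10^{math.log10(gap):.0f} -- about ten orders of magnitude")
```

With these assumptions a typist emits roughly 50 bits per second, squarely in the "tens, maybe hundreds" range mentioned, and the machine link is about ten orders of magnitude faster.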

play48:15

And this is admittedly somewhat of a, you know, Hail

play48:17

Mary shot or

play48:18

whatever: if you can improve the

play48:20

bandwidth between, uh, your cortex and

play48:23

your digital tertiary self then you can

play48:26

achieve better cohesion between what

play48:29

humans want and what AI does at least

play48:31

that's one theory. I'm not saying this is

play48:32

a sure thing; it's just one potential

play48:35

iron in the fire. If, ultimately, you

play48:38

know, hundreds of millions or billions of

play48:39

people get a high bandwidth interface to

play48:42

their digital tertiary self, their AI self,

play48:44

effectively, then that seems like

play48:47

that probably leads to a better future

play48:49

for humanity. Elon Musk's apprehension

play48:52

about the future of AI is matched only

play48:55

by his curiosity as he grapples with the

play48:57

dual possibilities of salvation and

play48:59

annihilation that AI presents for

play49:02

humanity. The danger with programming

play49:04

morality in, with an explicit

play49:07

morality program is what is sometimes

play49:09

referred to as the Waluigi problem: if

play49:11

you create Luigi, you automatically create

play49:14

Waluigi by inverting Luigi.
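A toy sketch of that inversion intuition (my illustration, not anything from the talk; the function names are hypothetical): once "being Luigi" is written down as an explicit score, "being Waluigi" is a single sign-flip away.

```python
# Toy illustration: once a morality is written down as an explicit score,
# its inversion is trivially available. Names here are hypothetical.

def luigi_score(action: str) -> float:
    """Hypothetical explicit 'be good' objective: rewards helpful actions."""
    good_actions = {"help": 1.0, "share": 0.8, "harm": -1.0}
    return good_actions.get(action, 0.0)

def waluigi_score(action: str) -> float:
    """The 'inverted Luigi': defined entirely in terms of the original."""
    return -luigi_score(action)   # one sign flip and the anti-goal exists

for action in ["help", "share", "harm"]:
    print(action, luigi_score(action), waluigi_score(action))
# The point: an explicit rule set fully specifies its own inversion,
# which is one intuition behind preferring truth-seeking over a
# hard-coded, arbitrary morality.
```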

play49:16

So I think we have to be careful about programming

play49:18

in an arbitrary morality. But if we

play49:21

focus on maximizing truth with

play49:23

acknowledged error, that's

play49:25

probably... I think that's the way to

play49:27

maximize safety, and also to have the

play49:30

AI be curious, 'cause I think that, you know,

play49:33

Earth is much more interesting to an

play49:35

advanced AI with humans on it than

play49:37

without

play49:45

humans
