i'm EXPOSING this NO MATTER what.
Summary
TL;DR: The transcript covers a wide range of topics related to AI, its development, and its potential impacts. Musk discusses the pace of AI advancement, the need to balance addressing inequality with recognizing common ground, the quest for AI safety amid rapid progress, and more. He contemplates existential threats such as an extinctionist philosophy being implicitly programmed into AI. Overall, Musk grapples with the duality of AI possibly bringing salvation or annihilation for humanity.
Takeaways
- 🤖 AI, like nuclear weapons and global warming, has the potential to destroy civilization.
- 🚀 It is obvious that digital intelligence will exceed biological intelligence by a substantial margin.
- 📱 AI could be used to create highly effective propaganda that influences elections and the direction of society.
- 🕊️ Humanity tends to pay more attention to everyday events than to existential threats.
- 🔍 Thanks to rapid technological progress, humanity today enjoys unprecedented prosperity compared with the past.
- 💡 Improving the education system is essential to children's futures and to achieving peace.
- 🌍 Maximizing truth-seeking and curiosity is key to ensuring AI safety.
- 🛰️ Modern technology, especially AI, is evolving faster than anything in human history.
- 📡 Society faces a growing gap between technological progress and social evolution.
- 🕹️ The development of AI holds the potential for either salvation or ruin for humanity.
Q & A
Could AI destroy civilization?
- Yes. Elon Musk states that AI has the potential to destroy civilization. In particular, a swarm of assassin drones built from the face-recognition chips used in cell phones is possible with today's technology, and that could be a direct threat to civilization.
Why would AI be a threat to humanity?
- The main reason is that its capabilities could vastly exceed human intelligence. This could make AI uncontrollable and lead to unpredictable behavior toward humanity.
How does AI-driven propaganda work?
- AI can hone a message, instantly analyze social-media feedback, and refine the message, producing extremely effective propaganda. This makes it possible to influence the direction of society and the outcome of elections.
What is needed to ensure AI safety?
- Elon Musk says that maximizing truth-seeking and curiosity is essential to ensuring AI safety. The rapid introduction of regulation and safety measures is also needed.
How fast is AI technology developing?
- AI is developing extremely rapidly; its compute is said to be increasing by a factor of ten roughly every six months. That pace is far too fast for humanity to keep up with, and it poses a major challenge for putting regulation and safety measures in place.
What problems does AI censorship cause?
- AI-driven censorship can erode the foundations of free speech by suppressing content deemed inappropriate. This risks harming open debate and the flow of information in society.
How does Elon Musk say improved connectivity would affect our relationship with AI?
- Musk says that improving the bandwidth between humans and AI could bring better cohesion between human intent and AI behavior, enabling better integration of humans and AI.
What does Elon Musk propose regarding AI regulation?
- Musk proposes establishing an AI regulatory agency to manage the potential dangers of AI technology. The agency would be responsible for overseeing the development and use of AI to protect public safety.
How does the bandwidth problem affect the relationship between humans and AI?
- The bandwidth problem means that humans output information to AI and computers extremely slowly. This lowers the efficiency of communication between humans and AI and stands in the way of integration.
How does Elon Musk feel about the future of AI?
- Musk holds dual feelings about the future of AI. On one hand, he believes AI could be enormously beneficial to humanity; on the other, he is deeply concerned about the potential dangers its development brings.
Outlines
🤖 The Threat of AI and the Future of Humanity
In this section, Elon Musk voices his concerns about AI. He points out that AI could destroy civilization, that digital intelligence will far exceed biological intelligence, and he discusses AI's ability to create highly effective propaganda that influences society and elections. He also touches on how little attention modern society pays to this kind of existential threat.
🌍 Social Justice and Technological Progress
In the second section, Musk focuses on the relationship between social justice and technology. He notes that humanity is far more prosperous today than in the past and stresses that economic growth is possible. He also speaks about the equal opportunity technological progress brings, especially equal access to information, and emphasizes the importance of judging individuals on merit regardless of gender or beliefs.
🚀 Humanity's Future and Space Exploration
In the third section, Musk shares his vision for humanity's future and space exploration. He views positively the modern technological progress that has expanded access to information and improved treatments for disease. He also stresses the importance of humanity exploring space as a multi-planetary species and discusses the benefits technological progress could bring.
🔬 The Rapid Evolution of AI Technology
Here Musk speaks about AI's rapid development and its potential risks. He notes that AI compute is increasing by a factor of ten every six months, and that AI may be the greatest existential threat facing humanity. He also mentions that even though AI has reduced views of hateful content, concerns about free speech have grown.
🌐 Digital Superintelligence and Its Impact
In the fifth section, Musk discusses the concept of digital superintelligence and the unknown impact it could have on humanity. He compares the technology to a black hole, emphasizing that its future is unpredictable, and stresses the importance of pursuing truth and encouraging curiosity to ensure AI safety.
💡 The Future Relationship Between Humans and AI
In the sixth section, Musk shares his thoughts on the future relationship between humans and AI. He proposes that by improving the bandwidth between humans and digital devices, humanity could build a more harmonious relationship with AI. He also argues that AI should be programmed to ethical standards, while pointing out the complexity and potential risks of doing so.
🌏 AI Regulation and Public Safety
In the final section, Musk gives his views on AI regulation and public safety. He argues that regulation is necessary because AI poses potential dangers to the public. He has also discussed AI risk with world leaders, noting in particular that China is taking AI regulation seriously.
Keywords
💡AI Safety
💡Digital Superintelligence
💡Extinctionist Philosophy
💡Propaganda
💡AI Censorship
💡Linear vs. Exponential Threats
💡Moral Absolutism
Highlights
AI's potential to surpass human intelligence and pose existential threats to civilization.
AI could relegate humanity to a minor role, similar to the impact of Homo sapiens on other primates.
Current technology allows for the creation of autonomous drones capable of targeted assassinations.
AI's effectiveness in creating highly persuasive propaganda, influencing societal views and elections.
The exponential pace of AI development outstripping linear regulatory responses.
The need to reconsider the assumption that the weaker party is always morally right.
Elon Musk stresses the importance of addressing educational indoctrination for a peaceful future.
The significance of moral absolutism in evaluating actions and intentions.
Humanity's unprecedented access to knowledge and the need for internet access to empower global learning.
The critique of legacy media and its competition with modern information platforms.
The call for a merit-based society that values skills and accomplishments over identity.
The dangers of AI-driven censorship on social media platforms.
The potential for AI to be programmed with an extinctionist philosophy.
The challenge of ensuring AI aligns with human values and ethics.
The concept of digital superintelligence as a transformative or potentially destructive force.
Transcripts
AI will destroy humanity? We had nuclear bombs, which could potentially destroy civilization. Obviously we have AI, which could destroy civilization. We have global warming, which could destroy civilization, or at least severely disrupt civilization. Digital intelligence will exceed biological intelligence by a substantial margin; it's obvious. We're not paying attention. We worry more about what name somebody called someone else than about whether AI will destroy humanity. That's insane, like children in a playground. Humanity really has not evolved to think about existential threats in general; we've evolved to think about things that are very close to us, near term, to be upset with other humans, and not really to think about things that could destroy humanity as a whole.

Excuse me, how could AI destroy civilization?

It would be in the same way that humans destroyed the habitat of primates. I mean, it wouldn't necessarily be destroyed, but we might be relegated to a small corner of the world. When Homo sapiens became much smarter than other primates, it pushed all the other ones into small habitats. They're just in the way.

Could an AI, even in this moment, just with the technology that we have before us, be used in some fairly destructive ways?

You can make a swarm of assassin drones for very little money, by just taking the face ID chip that's used in cell phones, adding a small explosive charge and a standard drone, and having them do a grid sweep of the building until they find the person they're looking for, ram into them, and explode. You can do that right now; no new technologies needed. Right now, probably a bigger risk than being hunted down by a drone is that AI would be used to make incredibly effective propaganda that would not seem like propaganda.

So these are deepfakes?

Yeah. Influence the direction of society, influence elections. Artificial intelligence just hones the message: it looks at the feedback and makes the message slightly better, within milliseconds. It can adapt its message, and shift and react to news. And there are so many social media accounts out there that are not people. How do you know it's a person, not a non-person?

One reason that regulators and others are a little bit in denial about this is the speed, the pace of change.

What is the consequence of that speed of change?

The way in which a regulation is put in place is slow and linear, and we are facing an exponential threat. If you have a linear response to an exponential threat, it's quite likely the exponential threat will win.
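The linear-versus-exponential point above can be made concrete with a minimal numeric sketch. The tenfold-per-six-months growth factor is the figure quoted later in the talk; the starting values and the size of the linear step are illustrative assumptions, not numbers from the transcript:

```python
# Illustrative sketch: a linearly growing response vs. an exponentially
# growing threat. Assumption: the "threat" (AI compute) multiplies by 10
# every 6-month period, as claimed in the talk; the "response" (regulatory
# capacity) only gains a fixed increment per period.

def periods_until_overtaken(threat0=1.0, response0=1000.0,
                            growth_factor=10.0, response_step=1000.0):
    """Return how many 6-month periods pass before the exponential
    threat first exceeds the linear response."""
    threat, response = threat0, response0
    periods = 0
    while threat <= response:
        threat *= growth_factor    # exponential: multiply each period
        response += response_step  # linear: add a fixed step each period
        periods += 1
    return periods

# Even with a 1000x head start, the linear response is overtaken in
# just a few periods.
print(periods_until_overtaken())  # -> 4
```

Raising the head start barely helps: even starting the response a billion times ahead only buys ten periods, which is the point of the "linear response to an exponential threat" remark.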
Well, I think we really need to stop this principle that the weaker party is always right. This is simply not true. We have to get rid of the rule that if you're weaker, you're automatically good. That obviously makes no sense.
The crowd falls into an eerie silence as Elon Musk delivers a stark warning about the current state of the world, capturing everyone's attention.

We have many things today that we did not have in the past. We are far more prosperous; all of humanity is far more prosperous today than it was at any time in the past. I think generally people should always be wary that they may have, either consciously or mostly subconsciously, internalized the notion of a zero-sum game, or a fixed pie. If you internalize that everything is zero-sum, meaning that in order for me to get ahead someone else has to not get ahead, or for me to have stuff someone else must not have stuff...

Elon Musk shocks the audience by highlighting the pervasive violence against the innocent that is unfolding in the world, prompting reflection and concern.

I think maybe the most shocking thing was to see the delight in killing innocent people, the delight in kids and defenseless women and men, and there was no remorse; quite the opposite.

In a call to action, Elon emphasizes the urgent need to fix the education system for the sake of our children's future, a plea that resonates profoundly.

That fundamentally has to be addressed, or there will not be peace. The education of kids, and the indoctrination of hate into kids, has to stop. If you have that axiomatic flaw, then what needs to be done is to fix that axiomatic flaw, because it is false. It's not a zero-sum game. We can absolutely grow, and have grown, and the evidence is overwhelming: we have grown the output of goods and services. I mean, that requires a level of indoctrination that is extremely intense. So I think to solve that, you have to address the source of the indoctrination, because no one should ever be glad about some child... You know, that was my top recommendation: you've got to make sure... I understand the need for this, to invade, and unfortunately people will... there's no way around it.

Elon delves into the complexities of morality, asserting that there's both good and bad in absolute terms, challenging the audience to consider absolute moral standards.

If you are, of course, oppressed, or the weaker party, it doesn't mean you're right, because if some of those weaker groups want to annihilate you, that does not make them good. It often makes sense: okay, you don't want to beat up someone smaller and weaker than you. But if that smaller group wants to annihilate you, they're bad.
Okay. I mean, I'm a big believer in moral absolutism, not moral relativism. There's good and bad in the absolute, and you judge any group or individual against absolute moral standards, not by whether they're the so-called oppressed or oppressor. Just on absolute moral terms: are they doing good things? Do they want to wipe out some people? That's bad; it doesn't matter who they are.
Elon characterizes the present era as the most interesting of times, inviting the crowd to ponder the profound shifts and challenges facing society.

I mean, it wasn't that long ago that we would count a good year as one where, well, the bubonic plague wasn't that bad, only 10 percent... You know, not that many people starved through the winter; we only lost 5% of our population to raids from other tribes. Basically, life used to be very rough in the old days, and if they could see us now, they'd be like: what are you guys complaining about? This is amazing. Not having to worry about food... I mean, we were food-constrained for probably the last 100,000 years, until recently. So really, the present day and future are amazing compared to the past, and anyone who doesn't think it's amazing is not a good student of history. So I think we live in the most interesting of times, and probably the best of times.

Musk critiques legacy media, pointing out their penchant for attempting to cancel certain platforms, raising questions about the freedom of expression.

Well, the reality is that X is competition for the legacy media. X is where people go to get the most current news and learn about the world, so the legacy media are our direct competitors, and they're going to try every angle to cancel X. I mean, if you want to know why things are happening, look at the incentives. Legacy media had a tough time with respect to usage: the numbers I saw were that traditional print and cable television viewership went down something like 20 to 30% last year, while on the other hand X went up roughly that same 20 to 30%. So it's direct competition for people's attention, and if there's some attack they can levy against me, they will.

The visionary entrepreneur advocates for a return to merit-based evaluation, urging society to judge individuals based on their competence, not on factors like gender or belief.

I think we need to return to where things were, or mostly were, which is a focus on merit. It doesn't matter whether you're a man or a woman, what race you are, what beliefs you have. What matters is how good you are at your job, what your skills are. You could be a three-legged green alien who wears a kimono and drinks yak milk; who cares? It doesn't matter. What matters is how good your work is, that's it. That's the least racist you can be: just care about the work that somebody does and nothing else. That's what the focus needs to return to.

It really has come completely full circle, or 180 degrees, from what has historically been the case. Through most of history the operating principle has been "might makes right." Really, up until modern times, if you were stronger, you were right. Now we've sort of flipped it: if you're weaker, you're right. But neither is true. There is rightness independent of strength or weakness. Just because somebody's strong doesn't mean they're right, and just because somebody's weak doesn't mean they're right. You have to look at morals in the absolute.

Musk highlights the need to counteract indoctrination that negatively influences children, underlining the importance of fostering critical thinking and independent perspectives.

But the most important thing is to ensure, afterwards, that the indoctrination, where kids are taught from as soon as they can understand language that their goal is to... and if you're told that from when you're a toddler, well, you're going to believe it, and that needs to stop. I think it is actually human nature to love humanity unless you are indoctrinated otherwise. I think the actual default for most people is to love humanity, and to love being around their fellow humans. Take, for example, one of the worst punishments in prison: solitary confinement. All solitary confinement means is that you don't get to hang out with the other prisoners, which might not be the best group of people to hang out with, but even that is considered a terrible punishment. So in truth, I think by our nature we all love humanity unless we are indoctrinated otherwise, and so we have to stop that indoctrination.

Elon Musk encourages seizing the unprecedented access to global knowledge, emphasizing the transformative power of information available at our fingertips.

Well, I do post a lot on the X platform, sometimes 100 times a day, so once in a while I'll do something dumb, for sure. But I really try to say things that I think are interesting or funny. I mean, there must be some reason why 169 million people follow me. I guess I must be keeping them amused in some way. So: amuse, entertain, have opinions on something; sometimes they're wrong, sometimes they're right. And for things like Community Notes, it applies to me as it applies to anyone else, so if I say something that's incorrect or lacking full context, Community Notes will correct me very quickly. But it's only me doing these posts, ever; I don't have a team or anything. In fact, I generally would recommend that leaders of the world just literally post their own stuff, and if once in a while you make a mistake, don't worry about it.

In a thought-provoking moment, Musk suggests that societal focus should balance addressing inequality with recognizing areas of common ground, challenging prevailing perspectives.

There are many wonderful, interesting things happening besides space exploration, obviously. As time goes by we improve our ability to cure cancer, to cure many diseases. There's increased access to information. People talk a lot about inequality, but what about the equality of access to information? That's incredible. Right now, with a very cheap electronic device at an internet cafe, you can access all of the lectures of MIT for free. You can access almost any book; you can learn anything. This is an equality of access to information that was unthinkable even 20 or 30 years ago. You can teach yourself how to do anything for free. That's amazing.
Maybe there's too much focus on the things that are unequal, and we forget about the things that are equal and that have reduced inequality so much, like access to information. That's one of the things we're trying to help with through Starlink: providing internet access to people who don't have it, or for whom it's too expensive, because once you have internet access you can learn anything, and you can sell your products and services. So I think that's pretty amazing. I mean, if we're going to count our flaws, we should also count our blessings.

I think there are some things that we can agree on, or that most people would agree are cool and inspiring, like humanity going to the Moon. If you ask kids almost anywhere in the world what the coolest thing humans have ever done is, I think a lot of kids would say we went to the Moon. So I think we want to continue that spirit of exploration. And, speaking of growing the pie, I think we want to have a dream that we can be a spacefaring civilization, a multi-planet species, a multi-stellar species, and go out there among the stars and discover the nature of the universe, that we can collectively seek greater enlightenment, to better understand this incredible universe we live in.
I find that very compelling; I think most people would find that very compelling. You know, I've had some disturbing conversations with some, say, nephews, or some family members, not my kids but kids of family members, where I was actually shocked to see antisemitism, or at least... One disturbing conversation was, you know, someone saying that we deserved to have the Trade Towers attacked because of our terrible foreign policy. I was like: this is what they're teaching you in elite New York high schools? This is messed up.

Well, one way that AI could go wrong is if the extinctionist philosophy is programmed into the AI, whether implicitly or explicitly. We're going to go in depth into artificial intelligence, which is potentially the biggest civilizational threat, and we are currently, you know, circling the event horizon of the black hole that is digital superintelligence. The event horizon... I mean, probably not explicitly, but there's a strong danger of an implicit extinctionist philosophy being programmed into AI.

Elon Musk contemplates the swift evolution of AI, highlighting its pace compared to traditional annual progress.

The rate at which AI is growing really boggles the mind. It currently seems as though the amount of compute dedicated to artificial intelligence is increasing by a factor of 10 roughly every six months. It's faster than annual, that's for sure. I recently heard about a gigawatt-class AI compute cluster.

The paradox arises as AI suppresses hateful content while simultaneously raising concerns about the erosion of free speech.

And this is despite showing repeated analyses of the system, including third-party analysis, which actually showed that the number of views of hateful content declined. So the third parties who have all the data analyzed it and said there's actually less hate speech.
Digital superintelligence, akin to a black hole, emerges as an unpredictable force, labeled the singularity by Musk.

You know, we'll have the sort of AGI singularity. A digital superintelligence is called a singularity, like a black hole, because just as with a black hole, it's difficult to predict what happens after you pass the event horizon. It's really staggering, and... I'm just trying to give a sense of scale: I've never seen anything move this fast, any technology. This is the fastest-moving thing. In terms of aiming for AI safety, my best guess, from my primitive biological neural net, is that we should aim for maximum truth-seeking and curiosity. That's my gut feel for how to make AI as safe as possible.

Musk's apprehensions intensify as AI development accelerates at an unprecedented rate, emphasizing the urgency of safety measures.

The issue, I think, is not a question of hate speech; it's not a question of antisemitism, obviously. It's that the ADL and a lot of other organizations have become activist organizations, which are acting far beyond their stated mandate, or their original mandate, and I think far beyond what donors to those organizations think they are doing.

Activism intertwines with AI discussions, with organizations like the ADL taking on roles that extend beyond their original mandates.

Neuralink has necessarily moved slower than AI, because whenever you put a device in a human you have to be incredibly careful. So it's not clear to me that Neuralink will be ready before AGI; I think AGI is probably going to happen first.

Neuralink's progress, while notable, trails behind the rapid advancement of artificial general intelligence, posing challenges.

So this is a staggering amount of compute, and there are many such things; that's just the biggest one I've heard of so far. There's a 500-megawatt installation happening, and there are multiple 100-megawatt installations in the works. It's not even clear to me what you do with that much compute, because when you actually add up all human data ever created, you run out of things to train on quite quickly. Like, if you've got maybe, I don't know, 20 or 30,000 H100s... basically you have to have synthetic data, because certainly with well under 100,000 H100s you can train on all human data ever created, including video.

A colossal 500-megawatt installation unfolds as a mammoth facility harboring vast reserves of synthetic data.

So I've actually met with a number of world leaders to talk about AI risk, because I think for a lot of people, unless you're really immersed in the technology, you don't know just how significant the risk can be. I think the reward is also very positive, so I don't want to be... I tend to view the future as a series of probabilities: there's a certain probability that something will go wrong, some probability it'll go right; it's a spectrum of outcomes. And to the degree that there is free will versus determinism, we want to try to exercise that free will to ensure a great future. The single biggest rebuttal I've gotten among leaders in the West with regard to AI is: well, sure, the West might regulate AI, but what about China? Because, to your point about which countries will have significant leadership in AI, China is certainly one of the very top, potentially number one.

Elon Musk takes on the role of a harbinger, cautioning global leaders about the perilous trajectory of unchecked AI development.
So you've got your limbic system, your basic drives; your cortex, which does the thinking and planning; and then you have a tertiary layer, which is your computers, your devices, your phones, laptops, all the servers that exist, the applications. In fact, I think a lot of people have found that if you leave your cell phone behind, or it's taken away, it's like missing-limb syndrome; you've probably read that losing your cell phone is like missing a limb. That's because your cell phone is an extension of yourself. The limitation is bandwidth: the rate at which you can input, or I should say output, information into your phone or computer is very slow. With a phone it's really just the speed of your thumb movements, and in the best-case scenario you're a speed typist on a keyboard, but even that data rate is very slow. We're talking about tens, maybe hundreds, of bits per second, whereas a computer can communicate in trillions of bits per second. So, and this is admittedly somewhat of a Hail Mary, a long shot: if you can improve the bandwidth between your cortex and your digital tertiary self, then you can achieve better cohesion between what humans want and what AI does. At least that's one theory; I'm not saying it's a sure thing, it's just one potential iron in the fire. If ultimately hundreds of millions or billions of people get a high-bandwidth interface to their digital tertiary self, their AI self effectively, then that probably leads to a better future for humanity.

Musk envisions a future where AI optimizes mundane tasks, envisioning a symbiotic relationship that uplifts humanity.

The danger with programming morality, with an explicit morality program, is what is sometimes referred to as the Waluigi problem: if you create Luigi, you automatically create Waluigi by inverting Luigi. So I think we have to be careful about programming an arbitrary morality. But if we focus on maximizing truth with acknowledged error, that's probably the way to maximize safety, and also to have the AI be curious, because I think Earth is much more interesting to an advanced AI with humans on it than without humans.

The Waluigi problem looms, urging a delicate balance in programming morality to guide AI without compromising human values.
We're at a very interesting juncture in the world from a technology standpoint; there are so many things happening. If you were to plot the various types of technology on a chart, in the modern era, and I'd say even just the last 20 years, certainly the last 100 years, from the dawn of human civilization the growth of technology just looks like a wall. Technology is improving at sort of a hyperexponential rate, and we obviously want to make sure that the technology is something that benefits humanity to the greatest extent possible.
possible you know and and what would
that look like what would that look like
well like there's this guy on the front
page of New York Times um think about a
year ago um he's head of the
extinctionist society and he was
literally quoted as there are 8 billion
people on on Earth it would be better if
there were none um oh my God and yeah um
so and if you if you take the extreme
environmentalist argument especially
like the implicit extreme
environmentalist argument they they
there's an imp implicit conclusion that
humans are a plague on the surface of
the Earth so I think we have to be quite
careful about um and an implicit like
like if the extinctionist movement was
somehow programmed into AI as as the
optimization that would be OB extremely
dangerous so I'm trying not to be sort
of a whatever a scaremonger or something
but when you're talking about having
something that is an intelligence far in
excess of the smartest human on earth
you have to say at that point Who's in
charge is it the computers or the humans
and you know there there's some
interesting ratios that I think are are
quite profound like one of them being
the ratio of digital to biological
compute so you take Al the all the human
brains and then all the the computer
circuits and you say what's that ratio
the ratio of digital to biological
computer is increasing dramatically
every year because the population of
Earth is fairly static but the output of
silicon is dramatically increased so
basically at a certain point the
percentage of compute that will be
biological is very small and anyway some
of these Technologies like and I'm a
technologist and I've gu some
responsibility for the creation of
artificial intelligence at least you
know a little bit and I think we just
want to make sure that we're guiding
things to a
technological you know a positive future
and and reduce the probability of a
negative
one we definitely live in the most
interesting times and actually for a
while I was kind of depressed about AI
but then I I kind of got fatalistic
about it and said like well even if even
if AI was going to you know end all all
Humanity would I prefer to be around to
see it or not I I guess I would prefer
to be around to see it just out of
curiosity but I obviously hopefully AI
is extremely beneficial to humanity but
but the thing that sort of reconciled me
to be less anxious about it was to say
well I guess even if it was apocalyptic
I'd still be curious to see the it's
like you know I be be curious to see
it I mean it's it's sort of a funny
thing like if you assume like a best
case AI scenario imagine if if if you're
the AI and you're trying to you just
want the human to tell you what it wants
just please spit it out but it's
speaking so slowly like a tree okay like
trees communicate okay they if you watch
a tree like a you know sped up version
of a tree growing it's actually
communicating it's communicating with
the soil it's trying to find the
sunlight you know it's reacting to other
trees and that kind of think very slowly
but from a tree standpoint it's you know
not that slow so so what I'm saying is
we don't want to be a tree that's that's
the idea behind a high bandwidth neural
interface is just even when the AI
desperately wants to do good good things
for us that we can actually communicate
several orders of magnitude faster than
we currently
can digital super
intelligence that might be the most
significant technology that Humanity
ever creates um and and it has the
potential to be more dangerous than um
go up
so
um you know in the case of pting opening
eye it was to have there not be a
unipolar world where um Google with its
subsidary heat mine uh you know would
control an overwhelming amount of AI
talent and compute and and resources um
which then is somewhat dependent on um
basically how how Larry paig um and
serge R um and things should go CU they
they between three of them or two out of
three have control over alphabet CU
they've got super voting rights and um
you know I was quite con based on some
conversations I had with lar Pig uh
where um you know you did call me a
species for being pro
humanity and um so I'm like what side
are you
on I think generally it would be a good
idea to have some kind of AI regulatory agency. You start off with a team that gathers insight to get maximum understanding, then you have some proposed rulemaking, and then eventually you have regulations that are put in place. This is something we have for everything that is a potential danger to the public: food with the Food and Drug Administration, aircraft and rockets with the FAA. For anything that is a danger to the public, we have learned over time, often the hard way, after many people were harmed, to have a regulatory agency to protect public safety. I'm not someone who thinks that regulation is some panacea where it's only good. Of course there are downsides to regulation: things move a bit slower, and sometimes you get regulatory capture and that kind of thing. But on balance, I think the public would not want to get rid of most regulatory agencies. You can also think of the regulatory agency as a referee. What sports game doesn't have a referee? You need someone to make sure that people are playing fairly and not breaking the rules, and that's why basically every sport has a referee of one kind or another. That's the rationale for AI safety, and I've been pushing this all around the world. When I was in China a few months ago meeting with some of the senior leadership, my primary topic was AI safety and regulation, and after we had a long discussion they agreed that there's merit to AI regulation and immediately took action in this regard. So sometimes we'll get this comment of, well, if the West does AI regulation, what if China doesn't and then leaps ahead? I think they're also taking it very seriously, because, you know, you could get the opposite of whatever moral constraints you programmed into the system.

We are currently, you know, circling the event horizon of the black hole that is digital superintelligence.
Probably not explicitly, but there's a strong danger of an implicit extinctionist philosophy being programmed into AI.

As we peer into the future of AI, the pace of its advancement leaves us spellbound, surpassing all expectations and defying the bounds of human imagination.

Well, I mean, one way that AI could go wrong is if the extinctionist philosophy is programmed into the AI, whether implicitly or explicitly.
We're going to go in depth into artificial intelligence, which is potentially the biggest civilizational threat. The integration of AI into social media platforms has ushered in an era of censorship, silencing voices deemed unsafe and eroding the foundations of free speech.

The rate at which AI is growing really boggles the mind. It currently seems as though the amount of compute dedicated to artificial intelligence is increasing by a factor of 10 roughly every 6 months. It's faster than annual, that's for sure. I recently heard today about a gigawatt-class AI compute cluster.

And this is despite, you know, showing repeated analyses of the system, including third-party analysis, which actually showed that the number of views of painful content declined. So, you know, the third parties have all the data analyzed, and it actually shows less hate speech.
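The growth rate quoted above, AI compute increasing roughly 10x every 6 months, is worth sanity-checking with simple arithmetic. The sketch below is my addition, not from the transcript; the comparison baseline (Moore's law at roughly 2x every 2 years) is a common rule of thumb used only to show how extreme the claimed pace is.

```python
def growth_over(years: float, factor: float, period_years: float) -> float:
    """Total multiplier after `years`, growing by `factor` every `period_years`."""
    return factor ** (years / period_years)

# The transcript's claim: 10x every 6 months.
ai_per_year = growth_over(1.0, 10.0, 0.5)

# Moore's-law-style baseline for comparison: ~2x every 2 years.
moore_per_year = growth_over(1.0, 2.0, 2.0)

print(f"AI compute growth per year:  {ai_per_year:.0f}x")    # 100x
print(f"Moore's law per year:        {moore_per_year:.2f}x")  # ~1.41x
```

At 10x per 6 months, a year of growth is a factor of 100, which is why the transcript calls it "faster than annual": it is roughly 70 times the annual rate of classic transistor scaling.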
Contemplating the trajectory of digital superintelligence feels akin to staring into a vast abyss, where the unknown looms large and the consequences remain shrouded in uncertainty.

You know, we'll have the sort of AGI singularity. Sometimes digital superintelligence is called a singularity, like a black hole, because just as with a black hole, it's difficult to predict what happens after you pass the event horizon. It's really staggering.
For sure, I'm just trying to give a sense of scale: I've never seen anything move this fast, any technology. This is the fastest-moving thing. In terms of aiming for AI safety, my best guess, from my primitive biological neural net, is that we should aim for maximum truth-seeking and curiosity. That's my gut feel for how to make AI as safe as possible.

Amidst this whirlwind of technological progress, the quest for AI safety becomes paramount, calling for an unwavering commitment to truth-seeking and curiosity-driven exploration.

The issue, I think, with the ADL is not a question of hate speech, obviously. It's that the ADL and a lot of other organizations have become activist organizations, which are acting far beyond their stated mandate, or their original mandate, and I think far beyond what donors to those organizations think they are doing.

Neuralink necessarily moves slower than AI, because whenever you put a device in a human you have to be incredibly careful. So it's not clear to me that Neuralink will be ready before AGI; I think AGI is probably going to happen first.
Organizations like the ADL have morphed into activist entities, straying far from their intended purpose and wielding influence beyond their mandate.

We're at a very interesting juncture in the world from a technology standpoint. There are so many things happening. If you were to plot the various types of technology on a chart, in the modern era, and I'd say even just the last 20 years, certainly the last 100 years, from the dawn of human civilization, the growth of technology just looks like a wall. Technology is improving at sort of a hyperexponential rate, and we obviously want to make sure that the technology is something that benefits humanity to the greatest extent possible.

And what would that look like? Well, there was this guy on the front page of the New York Times about a year ago who heads up an extinctionist society, and he was literally quoted as saying there are 8 billion people on Earth and it would be better if there were none. Oh my God. And if you take the extreme environmentalist argument, especially the implicit extreme environmentalist argument, there's an implicit conclusion that humans are a plague on the surface of the Earth. So I think we have to be quite careful: if the extinctionist movement was somehow programmed into AI as the optimization, that would be extremely dangerous.

While AI hurtles forward at breakneck speed, endeavors like Neuralink proceed with caution, mindful of the complexities and ethical considerations inherent in merging technology with the human body.
So I try not to be a scaremonger or something, but when you're talking about having something that is an intelligence far in excess of the smartest human on Earth, you have to ask: at that point, who's in charge, the computers or the humans? There are some interesting ratios that I think are quite profound, one of them being the ratio of digital to biological compute. You take all the human brains and all the computer circuits and ask, what's that ratio? The ratio of digital to biological compute is increasing dramatically every year, because the population of Earth is fairly static but the output of silicon is dramatically increasing. So basically, at a certain point, the percentage of compute that is biological will be very small.
Anyway, some of these technologies: I'm a technologist, and I bear some responsibility for the creation of artificial intelligence, at least a little bit, and I think we just want to make sure that we're guiding things to a positive technological future and reducing the probability of a negative one.

The exponential growth of technology demands our vigilance, ensuring that its benefits align with the greater good of humanity.
We tread a precarious path, wary of the potential for AI to embody existential threats such as the extinctionist movement, underscoring the need for conscientious oversight and regulation.
So this is a staggering amount of
system so this is a staggering amount of
compute um and and there are many such
such things that that's just the biggest
one I've heard of so far but there are
there's a 500 megawatt installation
happening there and there's there's
there's multiple 100 100 megawatt
installations um in the works I I it's
don't even clear to me what what you do
with that much
um compute um cuz when you when you
actually add up all human data ever
created you really just run out of
things to train on very quite quickly um
like you you know if you've got maybe I
don't know 20 or 30,000 h100s you can
train on synthetic data almost yeah yeah
you basically you have to have have
synthetic data because po simp well
under 100,000 h100s you can train on all
human data ever created including video
As AI outpaces human advancement, the growing chasm between technological progress and societal evolution raises concerns about the balance of power in an AI-dominated world.

So I've actually met with a number of world leaders to talk about AI risk, because I think for a lot of people, unless you're really immersed in the technology, you don't know just how significant the risk can be. I think the reward is also very positive, so I don't want to be, you know... I tend to view the future as a series of probabilities: there's a certain probability that something will go wrong, some probability it'll go right. It's kind of a spectrum of things, and to the degree that there is free will versus determinism, we want to try to exercise that free will to ensure a great future. The single biggest rebuttal that I've gotten among leaders in the Western world with regard to AI is: well, sure, the West might regulate AI, but what about China? To your point about which countries will have significant leadership in AI, China is certainly one of them, one of the very top, potentially number one.

So you've got your limbic system,
your sort of basic drives; your cortex, which is the thinking and planning; and then you have a tertiary layer, which is your computers, your devices, your phones, laptops, all the servers that exist, the applications. In fact, I think probably a lot of people have found that if you leave your cell phone behind, you feel panicky. If you forget your cell phone, it's like missing-limb syndrome: where'd that thing go? Losing your cell phone is like missing a limb, because your cell phone is an extension of yourself. The limitation is bandwidth: the rate at which you can input, or I should say output, information into your phone or computer is very slow. With a phone it's really just the speed of your thumb movements, and in the best-case scenario you're a speed typist on a keyboard, but even that data rate is very slow. We're talking about tens, maybe hundreds, of bits per second, whereas a computer can communicate in trillions of bits per second.
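Using the transcript's own figures, tens to hundreds of bits per second for a human typing versus trillions of bits per second for machines, the gap works out to around ten orders of magnitude. The constants below are order-of-magnitude placeholders matching those quoted figures, not measurements.

```python
import math

# Order-of-magnitude comparison of human output bandwidth vs machine bandwidth,
# using the figures quoted in the transcript. Constants are illustrative.
human_bps = 100.0    # fast typist, upper end of "tens to hundreds" of bits/sec
machine_bps = 1e12   # "trillions of bits per second", a terabit-class link

ratio = machine_bps / human_bps
print(f"machine/human bandwidth ratio: {ratio:.0e}")
print(f"orders of magnitude gap:       {math.log10(ratio):.0f}")
```

That ten-orders-of-magnitude gap is the quantitative motivation for the "high-bandwidth neural interface" argument: closing even a few of those orders would change the human side from tree-speed to something a machine can usefully wait for.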
This is admittedly somewhat of a, you know, Hail Mary shot or whatever, but if you can improve the bandwidth between your cortex and your digital tertiary self, then you can achieve better cohesion between what humans want and what AI does. At least, that's one theory. I'm not saying this is a sure thing; it's just one potential iron in the fire. If ultimately hundreds of millions or billions of people get a high-bandwidth interface to their digital tertiary self, their AI self effectively, then that probably leads to a better future for humanity.

Elon Musk's apprehension about the future of AI is matched only by his curiosity, as he grapples with the dual possibilities of salvation and annihilation that AI presents for humanity.

The danger with programming in morality, with an explicit morality program, is what is sometimes referred to as the Waluigi problem: if you create Luigi, you automatically create Waluigi by inverting Luigi. So I think we have to be careful about programming in an arbitrary morality. But if we focus on maximizing truth with acknowledged error, that's probably, I think, the way to maximize safety, and also to have the AI be curious, because I think, you know, Earth is much more interesting to an advanced AI with humans on it than without humans.