SECRET WAR to Control AGI | AI Doomer $755M War Chest | Vitalik Buterin, X-risk & Techno Optimism
Summary
TLDR: The transcript discusses the firing of OpenAI researchers, allegedly over leaked information related to AI safety and reasoning. It delves into the concept of effective altruism (EA), questioning its secretive nature and potential links to a global governance movement. The video highlights concerns about EA's influence in AI research and its push for regulations that could lead to widespread surveillance and control. It contrasts this with the views of those who advocate for technology and AI advancement, sparking a debate on the balance between safety and progress in AI development.
Takeaways
- 🔍 The video discusses a controversy involving the firing of researchers at OpenAI, allegedly linked to information leaks about a still-unexplained project known as 'Q*' (Q-star).
- 🌐 It explores the concept of Effective Altruism (EA), originally founded to use evidence and reason to maximize human well-being, but suggests it may have evolved into something more secretive and potentially manipulative.
- 📉 The script touches on the connections between EA and high-profile tech figures and companies, including references to Elon Musk and the FTX scandal involving Sam Bankman-Fried.
- 🔥 It raises concerns about the potential for a shadowy, global governing body as envisioned by EA proponents, capable of overriding national sovereignties to address perceived existential risks.
- 🔬 The narrative questions the transparency and true intentions behind EA, contrasting public mission statements with secretive or potentially harmful actions.
- 💾 Discusses the regulatory impact on technology, specifically AI, suggesting that stringent regulations might hinder technological progress and innovation.
- ⚖️ There's a detailed critique of proposed AI safety measures which include banning high-capacity GPUs and extensive surveillance of software development.
- 🚨 Highlights the significant influence and financial movements within the EA community, linking large donations and their use in controversial or opaque ways.
- 🌍 Calls attention to the broader implications of AI governance, warning that excessive control could lead to a dystopian oversight of technological advancements.
- 🤖 Expresses a balanced view on technology's potential, advocating for cautious yet progressive development to avoid both stagnation and unchecked risks.
Q & A
What was the primary reason behind the firing of Sam Altman from OpenAI?
-Sam Altman was fired during a controversy in November, which involved leaks from OpenAI. The script suggests there were internal conflicts and potential misuse of information, but specific details about the cause of his firing were not explicitly stated.
What are the core principles of Effective Altruism as described in the script?
-Effective Altruism (EA) is described as an approach that uses evidence and reason to determine the most effective ways to benefit others and then takes action based on those findings. It began with the mission of figuring out how to assist humanity optimally using a rational and scientific method.
What controversy is associated with Effective Altruism according to the script?
-The script mentions that Effective Altruism has been linked to secretive operations and potentially having different underlying agendas than stated, as evidenced by its involvement in the OpenAI controversies and connections with individuals like Sam Bankman-Fried, who faced legal issues related to financial fraud.
What concerns are raised about AI safety and global governance in the script?
-The script raises concerns about proposals from figures within the Effective Altruism community advocating for a global government to manage existential risks, including AI. This includes potential overreach such as making certain technologies illegal and imposing pervasive surveillance to control AI development.
How did the Future of Life Institute reportedly use funds received from Vitalik Buterin according to the script?
-The Future of Life Institute used funds from Vitalik Buterin, which came from liquidating Shiba Inu cryptocurrency tokens, to create the Vitalik Buterin Fellowship in AI Existential Safety. This was part of their broader goal to promote AI safety.
What legal implications are mentioned in the script regarding the development and regulation of AI?
-The script discusses new regulatory frameworks that could grant significant power to administrators, including making certain hardware illegal, conducting raids, compelling testimony, and potentially shutting down sectors of the AI industry temporarily.
What are the stated goals of the Future of Life Institute as described in the script?
-The Future of Life Institute aims to mitigate existential risks through regulatory and policy interventions. They focus on creating mechanisms and institutions that can govern AI development globally to ensure safety and prevent misuse.
What skepticism does the character Larry David represent in the script's narrative on technological optimism?
-Larry David's character in the script symbolizes skepticism towards major technological advancements and investments, specifically highlighting the potential risks and downsides that often accompany new innovations, as illustrated by his dismissal of FTX in a commercial.
According to the script, how does the author view the duality of technology's potential for both benefit and harm?
-The author of the script acknowledges that while technology, including AI, offers tremendous potential benefits like enhanced drug discovery and renewable energy, it also poses significant risks if not managed properly, highlighting the need for balanced and cautious advancement.
What is the significance of the debate between 'accelerationists' and 'anti-technology' perspectives as discussed in the script?
-The script contrasts 'accelerationists', who believe in advancing technology rapidly to achieve a utopian future, with 'anti-technology' advocates, who argue for slowing down technological progress due to safety concerns. This debate is central to discussions on how society should handle emerging technologies like AI.
Outlines
🔍 Investigating Leaks and Effective Altruism Controversies
The paragraph starts by discussing the firing of certain OpenAI researchers, including Leopold Aschenbrenner and Pavel Izmailov, for leaking confidential information. The scenario ties back to a previous incident involving the dismissal of Sam Altman from OpenAI and alludes to mysterious leaks related to a project called 'Q*' (Q-star). The discussion then shifts to effective altruism (EA), introducing it as a movement started by Peter Singer and others, aiming to use evidence and reason to maximize benefits to others. However, the narrative suggests that the movement may have secretive and possibly sinister aspects, particularly in relation to AI safety and hidden agendas, emphasizing a lack of transparency in its operations.
🎭 Turmoil and Secrecy Within AI Leadership
This paragraph highlights the turmoil within AI organizations, particularly focusing on the secretive nature of the board members of OpenAI during a significant crisis involving the firing and later rehiring of Sam Altman. It details the uncommunicative and obscure handling of the crisis by board members like Ilya Sutskever and Helen Toner, who were possibly influenced by their affiliations with the effective altruism movement. Additionally, it discusses the continuous secretive stance of the organization even after external inquiries, suggesting a possible deeper agenda or conflict within the AI community.
🌐 Ideological Divides and Global AI Governance
The third paragraph examines various influential figures and their connection to the effective altruism movement and their controversial views on AI and global governance. It mentions Vitalik Buterin and Max Tegmark, highlighting significant donations to AI safety and existential risk initiatives. The text critically assesses the push for regulations that may lead to a global surveillance state, hinting at the motivations behind these moves as potentially controlling and authoritarian, rather than purely altruistic or safety-driven.
🤖 Skepticism and Criticism of AI Safety Narratives
This paragraph discusses the marketing of AI safety and effective altruism as protective measures against existential risks, yet it hints at underlying motives of control and power within these narratives. It includes a satirical take on a Super Bowl commercial made by FTX featuring Larry David, which inadvertently underscores the need for skepticism toward too-good-to-be-true offers. The narrative questions the integrity of the effective altruism movement and its leaders, suggesting that their proposed solutions might actually cloak ambitions of global dominance under the guise of humanitarian aid.
🌍 Navigating the Future of AI and Humanity
The final paragraph focuses on Vitalik Buterin's nuanced perspective on technology and AI, introducing the concept of techno-optimism, which advocates for technological progress as a force for good. It discusses different ideological views regarding the future of AI—ranging from dystopian fears to utopian hopes—and emphasizes the importance of cautious yet forward-thinking approaches to AI development. The discussion underscores the complex and polarized debates surrounding AI, urging a balanced understanding and careful consideration of how AI policies are shaped and implemented.
Keywords
💡Effective Altruism
💡AI Safety
💡Global Governance
💡Extinction Risk
💡Techno-Optimism
💡Regulatory Overreach
💡AI Doomer
💡Surveillance
💡Crypto Fraud
💡OpenAI
Highlights
OpenAI researchers were fired for leaking information, including an ally of Ilya Sutskever; the two named are Leopold Aschenbrenner and Pavel Izmailov.
The leaks were related to AI safety and reasoning, and the researchers seem to have ties to the effective altruism movement.
Effective Altruism (EA) is a movement that started in 2011 with the aim of using evidence and reason to benefit others as much as possible.
EA has been supported by tech figures like Sam Bankman-Fried and Elon Musk, but has also been criticized for pushing a dangerous brand of AI safety.
The OpenAI board of November 2023 included Adam D'Angelo and Ilya Sutskever; after the fiasco, most of that board was replaced and Sam Altman returned to run OpenAI.
Helen Toner, who was part of the effective altruism community, was behind a paper criticizing OpenAI's handling of information releases.
Aschenbrenner, who was fired from OpenAI, had previously worked at the Future Fund, which was started by Sam Bankman-Fried.
Vitalik Buterin, the creator of Ethereum, has shown support for EA and has donated significant amounts to the Future of Life Institute.
The Future of Life Institute has been involved in AI safety discussions and has proposed regulations for AI systems.
There are concerns about the potential for EA to create a unified global government with absolute power to enforce its views on existential risks.
The transcript discusses the potential dangers of AGI (Artificial General Intelligence) and the different views on how to approach its development.
The speaker expresses skepticism about the intentions of those who advocate for global governance and surveillance in the name of AI safety.
The concept of d/acc (defensive acceleration) is introduced as a nuanced approach to balancing technological advancement with caution and defense.
Vitalik Buterin's blog post outlines his views on the potential dangers and benefits of AI, placing himself in a category of cautious optimism.
The transcript highlights the importance of understanding different perspectives on AI development and the potential impact on society.
The speaker calls for individuals to form their own opinions on AI policies and regulations, rather than having decisions made for them.
The transcript concludes with a call to action for viewers to consider their stance on AI safety versus tech optimism and the implications for the future.
Transcripts
I don't even know where to begin but I
guess let's start here OpenAI researchers
including an ally of Ilya Sutskever fired for
leaking information out of OpenAI if
you recall that whole November Fiasco
with the firing of Sam Altman the Q* (Q-star)
leaks which have been confirmed to be
true by the way we still don't know what
Q* is but the leaks were real well
apparently some of the researchers
behind some of the leaks we don't know
specifically which ones they have been
found and fired Leopold Aschenbrenner and
Pavel Izmailov so we're not sure what they
leaked but it seems like they were
working on AI safety Pavel was also
working on reasoning as well as AI
safety do you think there's a chance
that these people have links to some
shadowy organization that is really
against AI so the information posted
their pictures here Leopold and Pavel
and of course it seems like they have
ties to the effective altruism movement
all right but to really understand
what's Happening Here we have to talk
about effective altruism EA as it's
sometimes referred to what is effective
altruism so couple quick disclaimers
first of all I don't know effective
altruism as well as I should so I am
relying on some of this information that
I find on the internet some of it may be
inaccurate if I'm off about something
I'll try to post Corrections in the
comments or do a follow-up video but
also at the same time I think it's it's
very difficult to understand exactly
what this thing is because I think while
maybe it started as one thing maybe even
an altruistic thing what it kind of
morphed into I think is very different
and as far as I can tell all of them are
very secretive about what they do what
their goals are it's really difficult to
figure out what it is that they actually
want not their stated mission of quote
unquote help Humanity but their actual
mission that they're trying to do what's
the thing that they're trying to
accomplish so it started in 2011 Peter
Singer Toby Ord remember Toby Ord and
William MacAskill and sort of their stated
mission was using evidence and reason to
figure out how to benefit others as much
as possible and taking action on that
basis so basically they wanted to think
about how to help Humanity in the best
way possible kind of take the long
view and kind of go about it in a
reasonable kind of scientific method
that's as best as I understand it which
explained like that I would say hey yes
this is a good group and I kind of share
those beliefs as well we should try to
help everyone and focus on the long term
and think about how to do so with
evidence and reason again the stated
mission is not the problem here in fact
here's William MacAskill a moral philosopher
at Oxford right so he has a book What We
Owe the Future here's Elon Musk saying
worth reading this is a close match to
My Philosophy so Elon Musk a number of
years ago said hey this this sounds like
a good idea right which again on the
surface sounds like a good idea helping
Humanity going about it in an
intelligent way thinking in longterm
versus short-term right here's Stephen
Mark Ryan saying should be a good read
will did a super fascinating podcast
with Tim Ferriss close to a decade ago
really got me thinking I just realized I
remember when Tim Ferriss
published his first ever podcast I think
he was very nervous about doing a
podcast so he really hit the wine very
very hard during that podcast yeah so it
kind of went off the rails towards the
end there but yeah it was close to a
decade ago actually now it has been a
decade and I feel very very old but the
point I'm trying to make is it's
important to understand that there's
what we say we want and then there's
what we actually do right so I'm sure we
all have a Spam box full of various
emails promising us wonderful things
that at face value yeah maybe we do want
they promise fortune fame and adulation
so the headline is good but the final
result is you having to dispute various
credit card charges because you've been
defrauded effective altruism started
with a good headline Let's Help the
world as much as possible how did it end
well it started with Sam Bankman-Fried
the founder of FTX I haven't followed
that too closely but it sounds like he
defrauded the various crypto investors
sounds like they're missing billions of
dollars this is an article from wired
effective altruism is pushing a
dangerous brand of AI safety this
philosophy supported by Tech figures
like Sam Bankman-Fried fuels the AI
research agenda creating a harmful
system in the name of saving Humanity so
Sam Bankman-Fried is in jail or I guess
federal prison technically he's not
having a good time there his lawyers are
arguing that he should have a reduced
sentence because he's uniquely
vulnerable to the dangers of prison
being autistic he has a hard time
picking up on certain social cues that
are you know very important to survival
in a place like that which by the way
I'm sure is 100% true I do not doubt
that claim however the lawyers are
asking for his sentence to be reduced to
5 years and I really doubt that that's
going to work so this was the OpenAI
board in November 2023 when that whole
Fiasco happens so we have Adam D'Angelo
so still on the board as of right now
founder and currently running quora so
if you've been hearing a little bit more
about Poe their little chatbot that um I
believe is running Anthropic's technology
now I think they've used both OpenAI
and Anthropic's Claude to run Poe if I
recall correctly but he's still there
then we have Ilya Sutskever who has been
strangely silent since the whole thing
we don't really know where he is then
Tasha McCauley and Helen Toner so we think
Helen Toner is the one behind a lot of
this there was a paper that she wrote
criticizing OpenAI how OpenAI handled some
of the releases that might have created
a clash with Sam Altman and that's the
thing that kind of led to this whole
thing and Helen Toner is part of the
effective altruism Community during the
whole OpenAI board crisis they refused to
talk about what was happening even
though they were getting calls from the
attorney general in fact the same
attorney general that I think put Sam
Bankman-Fried away called them they had
a two-hour long conversation again this
is based on some of the leaks that we
were seeing from open AI they refused to
expand on what was happening they were
still very secretive they didn't want to
get any information out there and
eventually that board was kicked out Sam
Altman came back to run OpenAI and to
this day I don't think they ever
explained what they were doing for what
reason they put out a statement that had
some words strung together but didn't
have any actual data they didn't have an
explanation it was just like we regret
the occurrence of the blah blah blah but
it didn't say anything right I think
this is the statement they put out
they're saying OpenAI's mission is to
ensure artificial general intelligence
benefits everyone and the board has to
prioritize this Mission accountability
is important it's even more important
for AGI we hope this happens as we told
the investigators deception manipulation
and resistance to thorough oversight
should be unacceptable and yet they
themselves don't seem to be very open in
what their concerns were what
actually happened so for all their talk
of accountability they're not really
accounting for their own actions it
seems to me like based on what I've seen
I just haven't found anywhere where they
talk about what their motivations are
here's Toby Ord one of his books he's
again one of the co-founders of this
movement effective altruism so this
is posted by David Z Morris he's saying
Ord is an unabashed advocate for unified
Global government who decides what's an
Extinction risk and who the hell decides
exactly how much is necessary Extinction
risk and this is uh from Toby Ord's book
another promising Avenue for incremental
change is to explicitly prohibit and
punish the deliberate or Reckless
imposition of unnecessary Extinction
risk international law is the natural
place for this as those who impose such
risk may well be national governments or
heads of state who could be effectively
immune to Mere national law so it seems
like what these people want to create is
a unified Global government that is able
to punish democratically elected heads
of state of governments if they perceive
what they're doing to be an Extinction
risk whatever that means like how do you
define what's an Extinction risk who
gets to decide right I mean this seems
to me like it would give them Absolute
Power to jail anyone heads of state
people running the country hopefully
elected democratically to just put them
away to remove them from the post or put
them in jail with no explanation other
than you pose an Extinction risk so
going back to Aschenbrenner and Izmailov
Aschenbrenner graduated from Columbia
University and has previously worked at
the Future Fund a fund started by the
former FTX chief Sam Bankman-Fried again
that's the guy that's in jail and has
his team of lawyers actively trying to
reduce that sentence but that fund was
aimed to finance project to improve
Humanity's long-term prospects then a year
ago Aschenbrenner joined OpenAI right and
several of the board members who fired
Altman also had ties to effective
altruism Tasha McCauley is a board member
of effective Ventures parent
organization of the Center for Effective
altruism and Helen Toner previously
worked at the effective altruism focused
open philanthropy project and of course
both left the board when Altman returned
as CEO so this is Vitalik Buterin so he
is the guy behind Ethereum Ethereum has
for most of the time been the number two
biggest and most successful
cryptocurrency after Bitcoin I don't
track that stuff too closely nowadays
but it used to be I think it's fair to
say that most of the time it was number
two it probably is now yeah I figured I
checked so yes it's number two and this
is Max Tegmark Future of Life Institute
another person that seemingly sort of
associated with EA cuz Future of Life and
EA seem linked so in May 2021 Vitalik
Buterin burns 80% of his SHIB holdings
and uses the remainder for long-term
charitable causes so Shiba Inu is one
of those crazy doggy cryptocurrency it
doesn't really matter the point is he
gives a lot of money to the future of
Life Institute we're talking to the tune
of
755 million so not quite a billion but
uh still quite a bit future of Life
Institute uses FTX so Sam Bankman-Fried
his company that defrauded investors out
of billions it liquidates the SHIB
tokens so it sells it basically converts
it into dollars I assume and they use
that money to create the Vitalik Buterin
Fellowship in AI Existential Safety
everyone pats themselves on the back the
Future of Life Buterin Shiba Inu
Community Sam Bankman-Fried Alameda Research
right here's Max Tegmark Vitalik then
November 2022 whoops the collapse of
Sam Bankman-Fried FTX and Alameda Research
due to fraud allegations boy they got so
lucky that they cashed out sounds like
they were able to get their money out in
time the future of Life Institute uh
posts for the EU in the transparency
register listing musk Foundation as the
top contributor of you know 3 million it
looks like but where's the nearly a
billion dollars from shibba Ino well it
lacks that amount the amount they
liquidated from the 2021 Shiba Inu
donations since the audit is still in
progress and the yearly budget presents
musk Foundation ation donation as The
prominent one right so the 3 and5
million from musk is listed as the top
one not the close to a billion dollars
from shibba Ino then of course we have
that pause AI experiments open letter
right everyone points to musk as the
person that funded the foundation right
then the update in 2023 the donation
listing the 600,000 in minor cryptocurrency
I guess it went down in value since the
donation in 2023 future Life Institute
participates in the UK AI safety Summit
Tegmark addresses the US Congress and
the EU AI Act has passed they've pushed
it through allowing for the regulation
of general purpose AI systems here's an
interview with one of the Future of Life
Institute's co-founders talking about how
they view protecting the world from AI
basically by making the hardware illegal
and subjecting you know software the
code that people write to pervasive
surveillance on a global scale take a
listen I do think that governments
certainly governments can make things
illegal well you can make Hardware
illegal you can also say that yeah
producing graphics cards above certain
capability level is now illegal and
suddenly you have like much much more
Runway as a civilization do you get into
a territory of having to put
surveillance on what code is running in
a data center yeah I mean regulating
software is much much harder than
hardware if you like let Moore's law
continue then like the surveillance has
to be more and more pervasive so my
focus for the foreseeable future will be
on kind of regulatory interventions I'm
kind of like trying to educate lawmakers
and kind of helping and perhaps hiring
lobbyists to try to make the world safer
now the future of Life Institute has a
new grant program for Global governance
mechanisms and institutions he wants to
ban the creation of AGIs and have various
surveillance mechanisms and this year
future of Life Institute tells Politico
that its efforts support Common Sense
regulations but what they're talking
about is Banning gpus these Nvidia cards
above a certain capacity those should be
made illegal the software people write
should be surveilled and if you also add
to that the fact that Ord one of the
co-founders of effective altruism is
talking about having some sort of a
global government that's above heads of
state above government that's able to
jail people for you know creating
existential risks which again is very
vague They Don't Really Define what an
existential risk is they don't really
talk through why they think AI might
kill everyone but it seems that they're
just pushing for regulation for having
political power Global political power
my spam box is full of very attractive
sounding headlines but in reality what
they want is to rip me off and take my
money same with Sam Bankman-Fried and the FTX
thing they wanted to help everyone get
wealthy and help the world but ended up
just ripping everybody off and losing
billions of dollars of investor funds
now these people are saying that they
want to save us from certain Doom
certain Extinction from AGI effective
altruism wants to help Humanity right
that's the headline what is the actual
thing that's going to happen there is
this funny commercial that was made by
FTX for the Super Bowl it was funny then
it's hilarious now because it featured
Larry David and Larry David was a
skeptical character he dismissed major
innovations that happened throughout
history like the wheel the fork the
toilet and now he's dismissing the
cryptocurrency exchange FTX the whole
point of the commercial is Don't Be Like
Larry invest with FTX what's funny here
is we should be like Larry and I don't
mean the real person Larry David who
himself sounds like lost a whole bunch
of money on crypto cuz his salary was in
crypto they paid him in crypto can you
believe that I'm talking about be like
Larry this mythical person that can
smell the BS when he sees it your email
spam box full of women that want to meet
you is a good headline but it's fraud
they just want to get your money
companies like FTX that say that they
want to make you rich it's fraud they
just want to take your money the people
that are saying that they want to
protect you from extinction by this
scary software say it with me it's fraud
do you want to install a global
governance mechanism ban and jail anyone
that disagrees with them probably
because they believe that they can
install themselves at the very top and
become the absolute Kings of the world I
hate to break it to you but these aren't
the good guys now I have to say here so
in regards to Vitalik Buterin I was
kind of surprised that he was caught up
in this he didn't strike me as one of
those people and maybe this is me being
naive maybe this is me being a little
bit too trusting but to me the jury's
still out on this guy and he posted this
image which I thought was excellent I
try to do my best to not go all in on
any specific view I like to be a little
bit more neutral I I have my biases I
have my opinions I have my preferences
but I think now more than ever it's
important to try to understand the
different opinions the different sides
you can have your preference but at
least understand where the other side is
coming from one view is the anti-
technology view it's this idea that
safety is behind and dystopia ahead and
there's quite a number of people that
kind share this view certainly the
people that we've talked about today
seem to see it in this fashion or at
least they say they do right
dystopia ahead right AI will kill
everyone AI will turn us all into paper
clips some people are saying it won't
just destroy humanity and Earth but our
entire universe will take over and turn
into paper clips or whatever other
scenario they Envision and that safety
is is behind us we have to kind of stop
technological progress decelerate learn
to live with less right less food less
Comfort less air conditioning and and
move backwards in time a lot of these
beliefs kind of also overlap with this
idea of depopulation right this is one
thing that Elon Musk kind of rails
against he's saying no we need more
people more kids we need the Next
Generation and he's kind of fighting
against the forces that are saying no we
need less people we need to reduce
Earth's population and by the way if
you're not following some of this these
are real conversations that that some
people are having including people that
wield a lot of political power a lot of
influence a lot of capital but that's
the anti-technology view right Outlaw
gpus and have a global worldwide
surveillance on software because if we
keep going down this path there's Doom
ahead right and there's the
accelerationist view that there's
dangers behind and Utopia ahead so right
now we're seeing a lot of progress with
AI for example doing drug Discovery
there's more and more overlap between
genomics and AI so potentially we could
cure some hard-to-cure diseases we can
have people live longer we can have more
targeted drugs that that help people
heal without the side effects we could
potentially be seeing our first
commercially viable fusion power plant
that will make energy very cheap people
are talking about going to colonize other
planets kind of removing the risk of
potentially being just on a single
planet as a species or one unlucky
meteor can take everyone out potentially
right so these people view advancing
technology as the right way and slowing
down and letting the crippling
regulations and these World governments
ruled by people that maybe we don't
agree with on everything I think we can
say that maybe those are the dangers the
authoritarian governments worldwide
surveillance Etc and then we have the
third system and that is what Vitalik
Buterin is saying that's my view that
there's dangers behind and multiple
paths forward ahead some good some bad
and this at least I can kind of agree
with the path forward has wonderful
amazing promises it has some dangers
potentially but I'm going to be 100%
honest and I'll come out and say this
the people with this Viewpoint scare me
the most the people that want to install
a global authoritarian surveillance
regime that is bigger than governments
in order to protect us from something
vague that they can't even fully
describe that scares me because even if
they are sincere and they're good people
and they're super duper nice and they
want the best for people well the next
generation that takes over may not be
and eventually we're going to run into
somebody that's going to use it for
something bad and at that point it will
be too late to do anything about it but
back to Vitalik my techno optimism this
blog post that he wrote it is Big it's
very very very it's huge it's pages and
pages and pages of notes and bullet
points and various uh charts and graphs
and whatever its table of contents is
like a page long his post also mentions
Marc Andreessen as one of the faces behind
techno optimism the people that believe
that technology that AI will help the
world he is by the way one of the main
guys behind a16z Andreessen Horowitz they
wrote this Techno-Optimist Manifesto on
the a16z website and they believe that
advancing technology is one of the most
virtuous things that we can do they
believe in ambition aggression
persistence relentlessness strength they
believe in Merit and achievement they
believe in Pride confidence and
self-respect when earned they believe in
free thought free speech and free
inquiry they believe in the actual
scientific method and the enlightenment
values of free discourse and challenging
the authority of experts they believe in
as Richard Feynman says science is the
belief in the ignorance of experts and I
would rather have questions that can't
be answered than answers that can't be
questioned and they have enemies and I
quote we have enemies our enemies are
not bad people but rather bad ideas
those enemies Go by different names
existential risk degrowth their enemy is
stagnation corruption regulatory capture
their enemy is speech control and
thought control they're saying our enemy
is deceleration degrowth depopulation
the nihilistic wish so trendy among our
Elites for fewer people less energy and
more suffering and death so I might go
back and read the Vitalik post try to
understand where he's coming from but a
quick AI summary that I did makes it
seem that he is indeed somewhere in
between he is in fact somewhere here he
believes that there are specific dangers
ahead specific very good paths ahead and
of course this bear behind us means that
he believes that technology should
Advance he believes that AI should grow
with humans that we should be integrated
with AI he has some pretty nuanced takes
on these whole ideas of what EA is what
e/acc is so e/acc is of course effective
accelerationism so in that uh Andreessen
Horowitz a16z their patron saints of
techno optimism I mean the first person
on there and I think also the second is
one of the leaders of that effective
accelerationism or e/acc movement so to me I
think Vitalik is trying to be very
nuanced in a very polarized world I
think he's somebody that thinks pretty
deeply about this stuff but I just can't
see him as an anti-technology person he
believes that technology is amazing and
there are very high costs to delaying it
there's this interesting chart he posted
so it's kind of like the different
quadrants on the right you have AGI
coming soon on the left not very soon
and down you have you know the risk of
P(doom) so all future value likely to be
destroyed by misaligned AGI if you're an
AI Doomer or not basically and towards
the top it's unlikely that AGI will
destroy everybody and uh they're saying
here this is not serious it's just
guesswork where everybody is but you can
see Sam Altman and he's saying AGI is
coming soonish and it's unlikely that
it's going to destroy everybody you have
the founder of Google up there like it's
highly unlikely that it's going to
destroy us right they're very very
positive about it of course at the very
bottom you have Yudkowsky probably the
most well-known AI Doomer you know Demis
Hassabis who is part of Google DeepMind who
they've placed into you know more of the
you know let's say he's cautious he's a
little bit towards yeah there could be
problems like we have to be careful Yann
LeCun is very positive Andrew Ng very
positive that it's not going to destroy
us interestingly Gary Marcus is high
on here but he tends to think that AI is
not going to be very effective and again
a lot of this is just guesswork it's not
serious in any way but looks like
Vitalik placed himself in the category
that AGI is not coming anytime soon and
it's unlikely to destroy so he's not
concerned but he thinks that there's a
chance he's maybe a little bit concerned
he places his P Doom so the probability
of something horrible happening
existential risk as
0.1 he's saying you don't have to buy
the story but in my opinion it's worth
worrying about and he's saying his
philosophy is d/acc and in a podcast
on Bankless he talks about d/acc what it
stands for so the d is defensive or kind
of accelerating but defensively so
carefully but also kind of stands for
decentralization as in getting away from
one potentially authoritarian government
or some Central system pulling the
strings of everything and everybody so
I'll post a survey down below somewhere
that will allow you to vote to kind of
show where you are on this whole thing
do you think we should accelerate
technology as much as we can accelerate
AI because there's more danger in
slowing down than there is in
accelerating are you more in line with
the whole world government controlling
everything surveilling everything and
just giving them Absolute Power because
only they can protect us from death by
AI I mean I'm sure there are some people
that believe that or do you think that
maybe we do need to accelerate but
defensively cautiously maybe you're
somewhere in between let me know I'm
curious where people fall on that
spectrum because I think these questions
are going to be more and more relevant
as you can see there are well-funded
organizations that are trying to push
through these regulations they've
succeeded in the EU and they're trying
here in the US as well they want to
control all forms of software anything
that could use neural nets they want to
control search engines or anything that
predicts the demand Supply price cost or
Transportation needs of products or
Services their powers are said to be
very open-ended so not a rule-making
process or a due process just give them
all the power and they will protect you
over and over the legislation has this
oneway ratchet Clause the administrator
has the freedom to make rules stricter
without any evidence but has to prove a
negative to relax any rules so easy to
gain more power but hard to give any of
it up no open-source software if it
doesn't get a government okay it cannot
be continued if you buy sell gift
receive trade or transport even one
covered chip like an Nvidia card that is
covered under this act well then
you have committed a crime and this
Frontier artificial intelligence system
administration can straight up compel
testimony and conduct raids for any
investigation or proceeding including
speculative proactive investigations
there's a massive criminal liability
section not just for people you know
doing the math and doing the AI but also
any officials who don't do their jobs
and here's the kicker emergency Powers
the administrator of this organization
that they plan to create can on his own
authority can shut down the frontier AI
industry for 6 months or I'm reading it
here as 60 days unless they are
confirmed by the president or Congress
and then that can extend it to one year
they can take full possession and
control of specified locations or
equipment related to Ai and the
administrator can conscript troops so
you can basically raise an army to fight
the nerds that are putting together
various AI software also of course all
other agencies have to consult this agency
if they're doing any AI enforcement
stuff they amend the antitrust law to
give the administrator a near veto on AI
mergers and they can use whatever
funding they can get their hands on
including the fines imposed and
donations so wherever you are in the
world I think you should figure out
where you stand on these policies on AI
safety versus Tech optimism Who are the
the good guys who are the bad guys you
should decide otherwise the decision
will be made for you with that said my
name is Wes Roth and thank you for watching