SECRET WAR to Control AGI | AI Doomer $755M War Chest | Vitalik Buterin, X-risk & Techno Optimism

AI Unleashed - The Coming Artificial Intelligence Revolution and Race to AGI
14 Apr 2024 · 29:38

Summary

TLDR: The transcript discusses the firing of OpenAI researchers, allegedly over information leaks related to AI safety and reasoning. It delves into the concept of effective altruism (EA), questioning its secretive nature and potential links to a global governance movement. The video highlights concerns about the influence of EA in AI research and the push for regulations that could lead to widespread surveillance and control. It contrasts this with the views of those who advocate for technology and AI advancement, sparking a debate on the balance between safety and progress in AI development.

Takeaways

  • ๐Ÿ” The video discusses a controversy involving the firing of researchers at OpenAI, allegedly linked to information leaks about an unspecified project named 'QAR'.
  • ๐ŸŒ It explores the concept of Effective Altruism (EA), originally founded to use evidence and reason to maximize human well-being, but suggests it may have evolved into something more secretive and potentially manipulative.
  • ๐Ÿ“‰ The script touches on the connections between EA and high-profile tech figures and companies, including references to Elon Musk and the FTX scandal involving Sam Bankman-Fried.
  • ๐Ÿ”ฅ It raises concerns about the potential for a shadowy, global governing body as envisioned by EA proponents, capable of overriding national sovereignties to address perceived existential risks.
  • ๐Ÿ”ฌ The narrative questions the transparency and true intentions behind EA, contrasting public mission statements with secretive or potentially harmful actions.
  • ๐Ÿ’พ Discusses the regulatory impact on technology, specifically AI, suggesting that stringent regulations might hinder technological progress and innovation.
  • โš–๏ธ There's a detailed critique of proposed AI safety measures which include banning high-capacity GPUs and extensive surveillance of software development.
  • ๐Ÿšจ Highlights the significant influence and financial movements within the EA community, linking large donations and their use in controversial or opaque ways.
  • ๐ŸŒ Calls attention to the broader implications of AI governance, warning that excessive control could lead to a dystopian oversight of technological advancements.
  • ๐Ÿค– Expresses a balanced view on technology's potential, advocating for cautious yet progressive development to avoid both stagnation and unchecked risks.

Q & A

  • What was the primary reason behind the firing of Sam Altman from OpenAI?

    - Sam Altman was fired during the November 2023 board controversy, which involved leaks from OpenAI. The script suggests there were internal conflicts and potential misuse of information, but the specific cause of his firing was never explicitly stated.

  • What are the core principles of Effective Altruism as described in the script?

    - Effective Altruism (EA) is described as an approach that uses evidence and reason to determine the most effective ways to benefit others, and then takes action based on those findings. It began with the mission of figuring out how to assist humanity optimally using a rational, scientific method.

  • What controversy is associated with Effective Altruism according to the script?

    - The script mentions that Effective Altruism has been linked to secretive operations and possibly to agendas different from those it states, as evidenced by its involvement in the OpenAI controversies and its connections with individuals like Sam Bankman-Fried, who faced legal consequences for financial fraud.

  • What concerns are raised about AI safety and global governance in the script?

    - The script raises concerns about proposals from figures within the Effective Altruism community advocating for a global government to manage existential risks, including AI. This includes potential overreach such as making certain technologies illegal and imposing pervasive surveillance to control AI development.

  • How did the Future of Life Institute reportedly use funds received from Vitalik Buterin according to the script?

    - The Future of Life Institute used funds from Vitalik Buterin, which came from liquidating donated Shiba Inu cryptocurrency tokens, to create the Vitalik Buterin Fellowship in AI Existential Safety. This was part of its broader goal of promoting AI safety.

  • What legal implications are mentioned in the script regarding the development and regulation of AI?

    - The script discusses proposed regulatory frameworks that could grant significant power to administrators, including making certain hardware illegal, conducting raids, compelling testimony, and temporarily shutting down sectors of the AI industry.

  • What are the stated goals of the Future of Life Institute as described in the script?

    - The Future of Life Institute aims to mitigate existential risks through regulatory and policy interventions. It focuses on creating mechanisms and institutions that can govern AI development globally to ensure safety and prevent misuse.

  • What skepticism does the character Larry David represent in the script's narrative on technological optimism?

    - Larry David's character symbolizes skepticism towards major innovations and investments, highlighting the potential risks and downsides that accompany new technology, as illustrated by his dismissal of FTX in its Super Bowl commercial.

  • According to the script, how does the author view the duality of technology's potential for both benefit and harm?

    - The author acknowledges that while technology, including AI, offers tremendous potential benefits, like enhanced drug discovery and cheap renewable energy, it also poses significant risks if not managed properly, highlighting the need for balanced and cautious advancement.

  • What is the significance of the debate between 'accelerationists' and 'anti-technology' perspectives as discussed in the script?

    - The script contrasts 'accelerationists', who believe in advancing technology rapidly to achieve a utopian future, with 'anti-technology' advocates, who argue for slowing technological progress due to safety concerns. This debate is central to how society should handle emerging technologies like AI.

Outlines

00:00

๐Ÿ” Investigating Leaks and Effective Altruism Controversies

The paragraph starts by discussing the firing of certain OpenAI researchers, including Leopold Ashenbrenner and Pavl Ismo, for leaking confidential information. The scenario ties back to a previous incident involving the dismissal of Sam Altman from OpenAI and alludes to mysterious leaks related to something called 'QAR'. The discussion then shifts to effective altruism (EA), introducing it as a movement started by Peter Singer and others, aiming to use evidence and reason to maximize benefits to others. However, the narrative suggests that the movement may have secretive and possibly sinister aspects, particularly in relation to AI safety and hidden agendas, emphasizing a lack of transparency in their operations.

05:03

Turmoil and Secrecy Within AI Leadership

This section highlights the turmoil within AI organizations, focusing on the secretive behavior of OpenAI's board during the crisis surrounding the firing and later rehiring of Sam Altman. It details the uncommunicative handling of the crisis by board members such as Ilya Sutskever and Helen Toner, who were possibly influenced by their affiliations with the effective altruism movement. It also notes the organization's continued silence even after external inquiries, suggesting a possible deeper agenda or conflict within the AI community.

10:05

๐ŸŒ Ideological Divides and Global AI Governance

The third paragraph examines various influential figures and their connection to the effective altruism movement and their controversial views on AI and global governance. It mentions Vitalik Buterin and Max Tegmark, highlighting significant donations to AI safety and existential risk initiatives. The text critically assesses the push for regulations that may lead to a global surveillance state, hinting at the motivations behind these moves as potentially controlling and authoritarian, rather than purely altruistic or safety-driven.

15:06

Skepticism and Criticism of AI Safety Narratives

This section discusses how AI safety and effective altruism are marketed as protection against existential risks, while hinting at underlying motives of control and power within these narratives. It includes a satirical take on FTX's Super Bowl commercial featuring Larry David, which inadvertently underscores the need for skepticism towards too-good-to-be-true offers. The narrative questions the integrity of the effective altruism movement and its leaders, suggesting that their proposed solutions might cloak ambitions of global dominance under the guise of humanitarian aid.

20:06

๐ŸŒ Navigating the Future of AI and Humanity

The final paragraph focuses on Vitalik Buterin's nuanced perspective on technology and AI, introducing the concept of techno-optimism, which advocates for technological progress as a force for good. It discusses different ideological views regarding the future of AIโ€”ranging from dystopian fears to utopian hopesโ€”and emphasizes the importance of cautious yet forward-thinking approaches to AI development. The discussion underscores the complex and polarized debates surrounding AI, urging a balanced understanding and careful consideration of how AI policies are shaped and implemented.

Keywords

Effective Altruism

Effective Altruism (EA) is a philosophical and social movement that uses evidence and careful analysis to determine the most effective ways to benefit others. In the script, EA is discussed extensively as having morphed from altruistic beginnings into a secretive and possibly controlling movement. Critics argue that while EA started with noble intentions to maximize global good, it has become entwined with controversies such as those involving AI safety and the influence of prominent tech figures like Sam Bankman-Fried, linking EA to potentially overreaching ideas of global governance.

AI Safety

AI Safety refers to the field of study concerned with ensuring that artificial intelligence systems are safe and beneficial for humanity. The script addresses AI safety repeatedly, particularly in the context of controversies surrounding Effective Altruism and the actions of figures like Sam Bankman-Fried. It describes how EA's support for AI safety might mask more intrusive aims, like stringent global regulations and surveillance, suggesting a conflict between stated benevolent goals and potential hidden agendas.

Global Governance

Global Governance in the script refers to the idea of creating overarching legal and political frameworks at a global level to manage or regulate issues affecting multiple countries or the entire planet, such as AI risks. It is presented as a controversial proposal by members of the EA community, potentially leading to powerful, centralized control over technological advancement and individual state actions, raising concerns about overreach and loss of sovereignty.

Extinction Risk

Extinction Risk is discussed in terms of potentially catastrophic events that could lead to the extinction of humanity, often mentioned in the context of uncontrolled AI development. The script explores the notion that influential groups may use the fear of extinction risks to justify strict global governance measures, including prohibitions and punitive actions against perceived threats, which could be abused for power consolidation.

Techno-Optimism

Techno-Optimism is a belief system that sees technology as a key driver of future progress and solutions to current problems, including medical advancements, energy solutions, and more. In the script, this viewpoint contrasts sharply with the more restrictive views of AI regulation proponents, advocating fewer restrictions on technological development to foster innovation and address global challenges effectively.

Regulatory Overreach

Regulatory Overreach refers to the excessive imposition of regulations beyond their intended purpose, often stifling innovation and infringing on freedoms. In the context of the video, this concept is tied to the proposed AI regulations discussed, which are seen as potentially enabling unprecedented surveillance and control over technological development under the guise of protecting humanity.

AI Doomer

AI Doomer refers to individuals or groups who believe AI development will inevitably lead to catastrophic outcomes for humanity. The script mentions this perspective as part of a broader debate on AI's future impact, positioning it against more optimistic views that emphasize AI's benefits in solving complex problems.

Surveillance

Surveillance in the video is linked to discussions of global governance and AI safety, where it is portrayed as a potential tool for excessively monitoring and controlling technology development. The script raises concerns about the intrusive nature of proposed surveillance measures and their implications for personal and corporate freedom.

Crypto Fraud

Crypto Fraud is discussed through the lens of the FTX scandal involving Sam Bankman-Fried, a prominent figure in the Effective Altruism movement. The script uses this incident to illustrate broader concerns about the trustworthiness of leaders within the EA and tech communities, suggesting that their publicly benevolent intentions might mask deeper issues of fraud and deception.

OpenAI

OpenAI is mentioned in relation to its internal controversies and its significant role in AI development and policy discussion. The script delves into the turmoil within OpenAI, including leadership changes and secrecy, to highlight broader themes of accountability and transparency in tech organizations, especially those operating in influential sectors like artificial intelligence.

Highlights

OpenAI researchers Leopold Aschenbrenner and Pavel Izmailov, reportedly allies of Ilya Sutskever, were fired for leaking information.

The leaks were related to AI safety and reasoning, and the researchers seem to have ties to the effective altruism movement.

Effective Altruism (EA) is a movement that started in 2011 with the aim of using evidence and reason to benefit others as much as possible.

EA has been supported by tech figures like Sam Bankman-Fried and Elon Musk, but has also been criticized for pushing a dangerous brand of AI safety.

Most of the OpenAI board from November 2023, which included Adam D'Angelo and Ilya Sutskever, was pushed out, and Sam Altman returned to run OpenAI.

Helen Toner, who was part of the effective altruism community, was behind a paper criticizing OpenAI's handling of information releases.

Aschenbrenner, who was fired from OpenAI, had previously worked at the Future Fund, which was started by Sam Bankman-Fried.

Vitalik Buterin, the creator of Ethereum, has shown support for EA and has donated significant amounts to the Future of Life Institute.

The Future of Life Institute has been involved in AI safety discussions and has proposed regulations for AI systems.

There are concerns about the potential for EA to create a unified global government with absolute power to enforce its views on existential risks.

The transcript discusses the potential dangers of AGI (Artificial General Intelligence) and the different views on how to approach its development.

The speaker expresses skepticism about the intentions of those who advocate for global governance and surveillance in the name of AI safety.

The concept of 'd/acc' (defensive, decentralized acceleration) is introduced as a nuanced approach to balancing technological advancement with caution and defense.

Vitalik Buterin's blog post outlines his views on the potential dangers and benefits of AI, placing himself in a category of cautious optimism.

The transcript highlights the importance of understanding different perspectives on AI development and the potential impact on society.

The speaker calls for individuals to form their own opinions on AI policies and regulations, rather than having decisions made for them.

The transcript concludes with a call to action for viewers to consider their stance on AI safety versus tech optimism and the implications for the future.

Transcripts

00:00

I don't even know where to begin, but I guess let's start here: OpenAI researchers, including allies of Ilya Sutskever, fired for leaking information out of OpenAI. If you recall that whole November fiasco with the firing of Sam Altman, the Q* leaks, which have been confirmed to be true, by the way (we still don't know what Q* is, but the leaks were real). Well, apparently some of the researchers behind some of the leaks, we don't know specifically which ones, have been found and fired: Leopold Aschenbrenner and Pavel Izmailov. We're not sure what they leaked, but it seems like they were working on AI safety; Pavel was also working on reasoning as well as AI safety. Do you think there's a chance that these people have links to some shadowy organization that is really against AI? The Information posted their pictures here, Leopold and Pavel, and of course it seems like they have ties to the effective altruism movement.

All right, but to really understand what's happening here, we have to talk about effective altruism, EA as it's sometimes referred to. What is effective altruism? A couple of quick disclaimers. First of all, I don't know effective altruism as well as I should, so I am relying on some of the information that I find on the internet; some of it may be inaccurate. If I'm off about something, I'll try to post corrections in the comments or do a follow-up video. But also, at the same time, I think it's very difficult to understand exactly what this thing is, because while maybe it started as one thing, maybe even an altruistic thing, what it morphed into, I think, is very different. And as far as I can tell, all of them are very secretive about what they do and what their goals are. It's really difficult to figure out what it is that they actually want: not their stated mission of quote-unquote 'help humanity', but their actual mission, the thing that they're trying to accomplish.

So it started in 2011: Peter Singer, Toby Ord (remember Toby Ord), and William MacAskill. Their stated mission was using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis. So basically they wanted to think about how to help humanity in the best way possible, take the long view, and go about it in a reasonable, scientific kind of method. That's as best as I understand it, and explained like that, I would say: hey, yes, this is a good group, and I kind of share those beliefs as well. We should try to help everyone, focus on the long term, and think about how to do so with evidence and reason. Again, the stated mission is not the problem here. In fact, here's William MacAskill, a moral philosopher at Oxford; he has a book, What We Owe the Future. Here's Elon Musk saying 'worth reading, this is a close match to my philosophy.' So Elon Musk a number of years ago said, hey, this sounds like a good idea, which again, on the surface, it does: helping humanity, going about it in an intelligent way, thinking long-term versus short-term. Here's Stephen Mark Ryan saying 'should be a good read.' Will did a super fascinating podcast with Tim Ferriss close to a decade ago; really got me thinking. I just realized I remember when Tim Ferriss published his first ever podcast. I think he was very nervous about doing a podcast, so he really hit the wine very, very hard during that one. Yeah, it kind of went off the rails towards the end there. But yeah, it was close to a decade ago; actually, now it has been a decade, and I feel very, very old.

The point I'm trying to make is that there's what we say we want, and then there's what we actually do. I'm sure we all have a spam box full of various emails promising us wonderful things that, at face value, yeah, maybe we do want. They promise fortune, fame, and adulation. So the headline is good, but the final result is you having to dispute various credit card charges because you've been defrauded. Effective altruism started with a good headline: let's help the world as much as possible. How did it end? Well, it started with Sam Bankman-Fried, the founder of FTX. I haven't followed that too closely, but it sounds like he defrauded the various crypto investors; sounds like they're missing billions of dollars. This is an article from Wired: 'Effective altruism is pushing a dangerous brand of AI safety. This philosophy, supported by tech figures like Sam Bankman-Fried, fuels the AI research agenda, creating a harmful system in the name of saving humanity.' So Sam Bankman-Fried is in jail, or I guess federal prison technically. He's not having a good time there. His lawyers are arguing that he should have a reduced sentence because he's uniquely vulnerable to the dangers of prison: being autistic, he has a hard time picking up on certain social cues that are, you know, very important to survival in a place like that. Which, by the way, I'm sure is 100% true; I do not doubt that claim. However, the lawyers are asking for his sentence to be reduced to 5 years, and I really doubt that that's going to work.

05:20

So this was the OpenAI board in November 2023 when that whole fiasco happened. We have Adam D'Angelo, still on the board as of right now, founder and currently running Quora. So if you've been hearing a little bit more about Poe, their little chatbot: I believe it is running Anthropic's technology now. I think they've used both OpenAI and Anthropic's Claude to run Poe, if I recall correctly. But he's still there. Then we have Ilya Sutskever, who has been strangely silent since the whole thing; we don't really know where he is. Then Tasha McCauley and Helen Toner. We think Helen Toner is the one behind a lot of this. There was a paper she wrote criticizing OpenAI, how OpenAI handled some of its releases, that might have created a clash with Sam Altman, and that's the thing that kind of led to this whole episode. And Helen Toner is part of the effective altruism community. During the whole OpenAI board crisis, they refused to talk about what was happening, even though they were getting calls from the attorney general. In fact, the same attorney general that I think put Sam Bankman-Fried away called them; they had a two-hour-long conversation. Again, this is based on some of the leaks that we were seeing out of OpenAI. They refused to expand on what was happening; they were still very secretive, they didn't want to get any information out there. Eventually that board was kicked out, Sam Altman came back to run OpenAI, and to this day I don't think they ever explained what they were doing, or for what reason. They put out a statement that had some words strung together but didn't have any actual data or explanation; it's just like, 'we regret the occurrence of the blah blah blah,' but it didn't say anything. I think this is the statement they put out: they're saying OpenAI's mission is to ensure artificial general intelligence benefits everyone, and the board has to prioritize this mission; accountability is important, and it's even more important for AGI; 'we hope this happens; as we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable.' And yet they themselves don't seem to be very open about what their concerns were or what actually happened. So for all their talk of accountability, they're not really accounting for their own actions. Based on what I've seen, I just haven't found anywhere where they talk about what their motivations were.

Here's Toby Ord, one of his books; he's again one of the co-founders of this movement, effective altruism. This is posted by David Z Morris; he's saying Ord is an unabashed advocate for unified global government. Who decides what's an extinction risk, and who the hell decides exactly how much is 'necessary' extinction risk? And this is from Toby Ord's book: 'Another promising avenue for incremental change is to explicitly prohibit and punish the deliberate or reckless imposition of unnecessary extinction risk. International law is the natural place for this, as those who impose such risk may well be national governments or heads of state, who could be effectively immune to mere national law.' So it seems like what these people want to create is a unified global government that is able to punish democratically elected heads of state if it perceives what they're doing to be an extinction risk, whatever that means. Like, how do you define what's an extinction risk? Who gets to decide? This seems to me like it would give them absolute power to jail anyone: heads of state, people running a country, hopefully elected democratically, to just put them away, remove them from their post or put them in jail, with no explanation other than 'you pose an extinction risk.'

So, going back to Aschenbrenner and Izmailov: Aschenbrenner graduated from Columbia University and had previously worked at the Future Fund, a fund started by the former FTX chief Sam Bankman-Fried. Again, that's the guy that's in jail and has his team of lawyers actively trying to reduce that sentence. But that fund was aimed at financing projects to improve humanity's long-term prospects. Then, a year ago, Aschenbrenner joined OpenAI. And several of the board members who fired Altman also had ties to effective altruism: Tasha McCauley is a board member of Effective Ventures, parent organization of the Centre for Effective Altruism, and Helen Toner previously worked at the effective-altruism-focused Open Philanthropy project. And of course, both left the board when Altman returned as CEO.

09:54

So this is Vitalik Buterin. He is the guy behind Ethereum. Ethereum has, for most of the time, been the number two biggest and most successful cryptocurrency after Bitcoin. I don't track that stuff too closely nowadays, but I think it's fair to say that most of the time it was number two. It probably is now. Yeah, I figured I'd check, so yes, it's number two. And this is Max Tegmark, Future of Life Institute, another person seemingly associated with EA, cuz Future of Life and EA seem linked. So in May 2021, Vitalik Buterin burns 80% of his SHIB holding and uses the remainder for long-term charitable causes. Shiba Inu is one of those crazy doggy cryptocurrencies; it doesn't really matter. The point is, he gives a lot of money to the Future of Life Institute. We're talking to the tune of 755 million: not quite a billion, but still quite a bit. The Future of Life Institute uses FTX, Sam Bankman-Fried's company (the one that defrauded investors out of billions), to liquidate the SHIB tokens: it sells them, basically converts them into dollars, I assume. And they use that money to create the Vitalik Buterin Fellowship in AI Existential Safety. Everyone pats themselves on the back: the Future of Life Institute, Buterin, the Shiba Inu community, Sam Bankman-Fried, Alameda Research. Here's Max Tegmark, Vitalik. Then, November 2022: whoops, the collapse of both Sam Bankman-Fried's FTX and Alameda Research due to fraud allegations. Boy, they got so lucky that they cashed out; sounds like they were able to get their money out in time. The Future of Life Institute files with the EU transparency register, listing the Musk Foundation as the top contributor at, you know, 3 million, it looks like. But where's the nearly a billion dollars from Shiba Inu? Well, it lacks that amount, the amount they liquidated from the 2021 Shiba Inu donation, since the audit is still in progress, and the yearly budget presents the Musk Foundation donation as the prominent one. So the 3 and a half million from Musk is listed at the top, not the close to a billion dollars from Shiba Inu. Then of course we have that 'pause AI experiments' open letter; everyone points to Musk as the person that funded the foundation. Then the update in 2023, the donation listing showing 600,000 in a minor cryptocurrency; I guess it went down in value since the donation. In 2023, the Future of Life Institute participates in the UK AI Safety Summit, Tegmark addresses the US Congress, and the EU AI Act, which they've pushed through, allows for the regulation of general-purpose AI systems. Here's an interview in which one of the Future of Life Institute's co-founders talks about how they view protecting the world from AI: basically, by making the hardware illegal and subjecting software, the code that people write, to pervasive surveillance on a global scale. Take a listen:

'I do think that governments... certainly, governments can make things illegal. Well, you can make hardware illegal. You can also say that, yeah, producing graphics cards above a certain capability level is now illegal, and suddenly you have, like, much, much more runway as a civilization.' 'Do you get into a territory of having to put surveillance on what code is running in a data center?' 'Yeah, I mean, regulating software is much, much harder than hardware. If you let Moore's law continue, then, like, the surveillance has to be more and more pervasive. So my focus for the foreseeable future will be on kind of regulatory interventions. I'm kind of, like, trying to educate lawmakers, and kind of helping, and perhaps hiring lobbyists, to try to make the world safer.'

Now the Future of Life Institute has a new grant program for global governance mechanisms and institutions. He wants to ban the creation of AGIs and have various surveillance mechanisms. And this year, the Future of Life Institute tells Politico that its efforts support 'common sense regulations.' But what they're talking about is banning GPUs: these Nvidia cards above a certain capacity should be made illegal, and what software people write should be surveilled. And if you also add to that the fact that Ord, one of the co-founders of effective altruism, is talking about having some sort of global government that's above heads of state, above governments, that's able to jail people for, you know, creating existential risks (which, again, is very vague; they don't really define what an existential risk is, and they don't really talk through why they think AI might kill everyone), it seems that they're just pushing for regulation, for having political power. Global political power.

14:58

My spam box is full of very attractive-sounding headlines, but in reality, what they want is to rip me off and take my money. Same with Sam Bankman-Fried and the FTX thing: they wanted to help everyone get wealthy and help the world, but ended up just ripping everybody off and losing billions of dollars of investor funds. Now these people are saying that they want to save us from certain doom, certain extinction from AGI. Effective altruism wants to help humanity, right? That's the headline. What is the actual thing that's going to happen? There is this funny commercial that was made by FTX for the Super Bowl. It was funny then; it's hilarious now, because it featured Larry David, and Larry David played a skeptical character. He dismissed major innovations that happened throughout history, like the wheel, the fork, the toilet, and now he's dismissing the cryptocurrency exchange FTX. The whole point of the commercial is: don't be like Larry, invest with FTX. What's funny here is we should be like Larry. And I don't mean the real person Larry David, who himself, it sounds like, lost a whole bunch of money on crypto, cuz his salary was in crypto. They paid him in crypto, can you believe that? I'm talking about: be like Larry, this mythical person that can smell the BS when he sees it. Your email spam box full of women that want to meet you is a good headline, but it's fraud; they just want to get your money. Companies like FTX that say they want to make you rich: it's fraud, they just want to take your money. The people that are saying they want to protect you from extinction by this scary software, say it with me: it's fraud. Why do they want to install a global governance mechanism, ban and jail anyone that disagrees with them? Probably because they believe that they can install themselves at the very top and become the absolute kings of the world. I hate to break it to you, but these aren't the good guys.

Now, I have to say here, in regards to Vitalik Buterin: I was kind of surprised that he was caught up in this. He didn't strike me as one of those people, and maybe this is me being naive, maybe this is me being a little bit too trusting, but to me, the jury's still out on this guy. And he posted this image, which I thought was excellent. I try to do my best to not go all in on any specific view. I like to be a little bit more neutral. I have my biases, I have my opinions, I have my preferences, but I think now, more than ever, it's important to try to understand the different opinions, the different sides. You can have your preference, but at least understand where the other side is coming from.

One view is the anti-technology view. It's this idea that safety is behind and dystopia ahead, and there's quite a number of people that share this view. Certainly the people that we've talked about today seem to see it in this fashion, or at least they say they do: dystopia ahead, AI will kill everyone, AI will turn us all into paper clips. Some people are saying it won't just destroy humanity and Earth, but our entire universe: it will take over and turn everything into paper clips, or whatever other scenario they envision. And that safety is behind us: we have to stop technological progress, decelerate, learn to live with less (less food, less comfort, less air conditioning) and move backwards in time. A lot of these beliefs also overlap with this idea of depopulation. This is one thing that Elon Musk rails against. He's saying no, we need more people, more kids, we need the next generation, and he's fighting against the forces saying no, we need fewer people, we need to reduce Earth's population. And by the way, if you're not following some of this, these are real conversations that some people are having, including people that wield a lot of political power, a lot of influence, a lot of capital. But that's the anti-technology view: outlaw GPUs and have global, worldwide surveillance on software, because if we keep going down this path, there's doom ahead.

And then there's the accelerationist view: that there's dangers behind and utopia ahead. Right now we're seeing a lot of progress with AI, for example in drug discovery. There's more and more overlap between genomics and AI, so potentially we could cure some hard-to-cure diseases, have people live longer, have more targeted drugs that help people heal without the side effects. We could potentially be seeing our first commercially viable fusion power plant, which would make energy very cheap. People are talking about colonizing other planets, removing the risk of being just on a single planet as a species, where one unlucky meteor can take everyone out. So these people view advancing technology as the right way, and slowing down, letting in the crippling regulations and these world governments ruled by people that maybe we don't agree with on everything (I think we can say that), maybe those are the dangers: the authoritarian governments, worldwide surveillance, etc.

20:14

And then we have the third view, and that is what Vitalik Buterin is saying, and it's my view too: that there's dangers behind and multiple paths forward ahead, some good, some bad. And this, at least, I can kind of agree with. The path forward has wonderful, amazing promises; it also has some dangers, potentially. But I'm going to be 100% honest, and I'll come out and say this: the people with the first viewpoint scare me the most, the people that want to install a global authoritarian surveillance regime that is bigger than governments in order to protect us from something vague that they can't even fully describe. That scares me, because even if they are sincere, and they're good people, and they're super duper nice, and they want the best for everyone, well, the next generation that takes over may not be, and eventually we're going to run into somebody that's going to use it for something bad, and at that point it will be too late to do anything about it.

But back to Vitalik. 'My Techno-Optimism,' this blog post that he wrote: it is big. It's very, very... it's huge. It's pages and pages and pages of notes and bullet points and various charts and graphs and whatever. Its table of contents is like a page long. His post also mentions Marc Andreessen as one of the faces behind techno-optimism, the people that believe that technology, that AI, will help the world. He is, by the way, one of the main guys behind a16z, Andreessen Horowitz. They wrote this Techno-Optimist Manifesto on the a16z website, and they believe that advancing technology is one of the most virtuous things that we can do. They believe in ambition, aggression, persistence, relentlessness, strength. They believe in merit and achievement. They believe in pride, confidence, and self-respect, when earned. They believe in free thought, free speech, and free inquiry. They believe in the actual scientific method and the Enlightenment values of free discourse and challenging the authority of experts. They believe, as Richard Feynman says, 'science is the belief in the ignorance of experts,' and 'I would rather have questions that can't be answered than answers that can't be questioned.' And they have enemies, and I quote: 'We have enemies. Our enemies are not bad people, but rather bad ideas.' Those enemies go by different names: existential risk, degrowth. Their enemy is stagnation, corruption, regulatory capture. Their enemy is speech control and thought control. They're saying: our enemy is deceleration, degrowth, depopulation, the nihilistic wish, so trendy among our elites, for fewer people, less energy, and more suffering and death.

So I might go back and read the Vitalik post, try to understand where he's coming from, but a quick AI summary that I did makes it seem that he is indeed somewhere in between. He is in fact somewhere here: he believes that there are specific dangers ahead and specific very good paths ahead, and of course this bear behind us in the image means that he believes that technology should advance. He believes that AI should grow with humans, that we should be integrated with AI. The post has some pretty nuanced takes on these whole ideas of what EA is and what e/acc is. E/acc is, of course, effective accelerationism. So in that Andreessen Horowitz, a16z, 'patron saints of techno-optimism' list, the first person on there, and I think also the second, is one of the leaders of that effective acceleration, or e/acc, movement. So to me, I think Vitalik is trying to be very nuanced in a very polarized world. I think he's somebody that thinks pretty deeply about this stuff, but I just can't see him as an anti-technology person. He believes that technology is amazing, and that there are very high costs to delaying it.

There's this interesting chart he posted, with different quadrants: on the right you have AGI coming soon, on the left not very soon, and down you have, you know, the risk, p(doom), 'all future value likely to be destroyed by misaligned AGI', whether you're an AI doomer or not, basically; and towards the top, it's unlikely that AGI will destroy everybody. And they're saying this is not serious, it's just guesswork about where everybody stands. But you can see Sam Altman, and he's saying AGI is coming soonish and it's unlikely to destroy everybody. You have the founder of Google up there, like it's highly unlikely that it's going to destroy us; they're very, very positive about it. Of course, at the very bottom you have Yudkowsky, probably the most well-known AI doomer. Demis Hassabis, who is part of Google DeepMind, they've placed into, you know, more of the... let's say he's cautious; he's a little bit towards 'yeah, there could be problems, we have to be careful.' Yann LeCun is very positive. Andrew Ng: very positive that it's not going to destroy us. Interestingly, Gary Marcus is high on here, but he tends to think that AI is not going to be very effective. And again, a lot of this is just guesswork, it's not serious in any way, but it looks like Vitalik placed himself in the category that AGI is not coming anytime soon and it's unlikely to destroy us. So he's not concerned, but he thinks that there's a chance; he's maybe a little bit concerned. He places his p(doom), the probability of something horrible happening, existential risk, at 0.1. He's saying you don't have to buy the story, but in my opinion, it's worth worrying about. And he's saying his philosophy is d/acc, and on a podcast on Bankless he talks about d/acc and what it stands for. The 'd' is defensive, as in accelerating, but defensively, carefully; it also stands for decentralization, as in getting away from one potentially authoritarian government or some central system pulling the strings of everything and everybody.

26:23

So I'll post a survey down below somewhere that will allow you to vote, to show where you are on this whole thing. Do you think we should accelerate technology as much as we can, accelerate AI, because there's more danger in slowing down than there is in accelerating? Are you more in line with the whole world-government idea, controlling everything, surveilling everything, and just giving them absolute power, because only they can protect us from death by AI? I mean, I'm sure there are some people that believe that. Or do you think that maybe we do need to accelerate, but defensively, cautiously? Maybe you're somewhere in between. Let me know; I'm curious where people fall on that spectrum, because I think these questions are going to be more and more relevant.

As you can see, there are well-funded organizations that are trying to push through these regulations. They've succeeded in the EU, and they're trying here in the US as well. They want to control all forms of software, anything that could use neural nets. They want to control search engines, or anything that predicts the demand, supply, price, cost, or transportation needs of products or services. Their powers are said to be very open-ended: no rule-making process or due process, just 'give them all the power and they will protect you.' Over and over, the legislation has this one-way ratchet clause: the administrator has the freedom to make rules stricter without any evidence, but has to prove a negative to relax any rules. So it's easy to gain more power, but hard to give any of it up. No open-source software: if it doesn't get a government okay, it cannot be continued. If you buy, sell, gift, receive, trade, or transport even one covered chip, like an Nvidia card that is covered under this act, well, then you have committed a crime. And this Frontier Artificial Intelligence Systems Administration can straight up compel testimony and conduct raids for any investigation or proceeding, including speculative, proactive investigations. There's a massive criminal liability section, not just for the people, you know, doing the math and doing the AI, but also for any officials who don't do their jobs. And here's the kicker: emergency powers. The administrator of this organization that they plan to create can, on his own authority, shut down the frontier AI industry for six months (or, as I'm reading it here, 60 days unless confirmed by the president or Congress, and then that can be extended to one year). They can take full possession and control of specified locations or equipment related to AI, and the administrator can conscript troops; so you can basically raise an army to fight the nerds that are putting together various AI software. Also, of course, all other agencies have to consult this agency if they're doing any AI enforcement, they amend the antitrust law to give the administrator a near veto on AI mergers, and they can use whatever funding they can get their hands on, including the fines imposed and donations.

So wherever you are in the world, I think you should figure out where you stand on these policies, on AI safety versus tech optimism. Who are the good guys, who are the bad guys? You should decide; otherwise, the decision will be made for you. With that said, my name is Wes Roth, and thank you for watching.

Related Tags
AI Safety · Effective Altruism · Tech Industry · Ethical AI · Global Governance · OpenAI · Sam Bankman-Fried · Vitalik Buterin · Regulatory Policies · AI Ethics