AI NEWS: OpenAI vs. Helen Toner. Is 'AI safety' becoming an EA cult?

AI Unleashed - The Coming Artificial Intelligence Revolution and Race to AGI
31 May 2024 · 29:34

Summary

TL;DR: The video discusses recent controversies surrounding OpenAI, focusing on the dismissal of Sam Altman and the subsequent fallout. It examines claims made by former board member Helen Toner, who alleges being kept in the dark about AI developments and accuses Altman of a history of deceit. The video also critiques the effective altruist movement's influence on AI safety, highlighting their extreme views on halting AI progress and the potential for global surveillance. The narrative questions the motives behind these actions and urges viewers to consider the broader implications of letting a vocal minority dictate AI regulation.

Takeaways

  • 🗣️ A former OpenAI board member, Helen Toner, has spoken out about the circumstances surrounding Sam Altman's firing, sparking controversy and debate within the AI community.
  • 🔍 Helen Toner claimed that she and others were kept in the dark about significant developments at OpenAI, such as the launch of ChatGPT, which they only learned about through Twitter.
  • 🚫 OpenAI's current board has refuted Helen Toner's claims, stating that they commissioned an external review which found no evidence of safety concerns leading to Sam Altman's departure.
  • 👥 The debate has become somewhat tribal, with people taking sides and supporting the narratives that align with their pre-existing views rather than objectively assessing the situation.
  • 💡 There are concerns that the conversation around AI safety is being dominated by a minority with extreme views, potentially skewing the direction of AI regulation and research.
  • 🌐 Some individuals within the effective altruist movement are pushing for stringent global regulations on AI development, including bans on certain technologies and surveillance measures.
  • 🕊️ The term 'AI safety' has been co-opted by groups with apocalyptic views on AI, leading to confusion and a tarnishing of the term for those working on legitimate safety concerns.
  • 💥 There is a risk that the focus on existential risks from AI could overshadow more immediate and tangible concerns about AI's impact on society and the need for practical safety measures.
  • 📉 The influence of certain organizations and individuals with extreme views could have negative repercussions on the AI industry, potentially stifling innovation and progress.
  • 🌟 The video script emphasizes the importance of balanced and evidence-based discussions around AI development and safety, rather than succumbing to fear-mongering or cult-like ideologies.

Q & A

  • What is the main controversy discussed in the video script?

    -The main controversy discussed is the dismissal of Sam Altman from OpenAI and the subsequent claims and counterclaims made by various parties, including Helen Toner, an ex-board member, and the current OpenAI board.

  • What was Helen Toner's claim about the ChatGPT revelation?

    -Helen Toner claimed that she and the board learned about ChatGPT on Twitter, suggesting they were kept in the dark about this significant AI breakthrough.

  • How did OpenAI respond to Helen Toner's claims?

    -OpenAI responded by stating that they do not accept the claims made by Helen Toner and fellow former board member Tasha McCauley. They commissioned an external review by a prestigious law firm, WilmerHale, which found that the prior board's decision did not arise from product safety or security concerns.

  • What is the significance of GPT-3.5 in the context of the video?

    -GPT-3.5 is an existing AI model that had been available for more than eight months before the release of ChatGPT. It shows that the technology behind ChatGPT was not new; what changed was its packaging as a chat application, whose interface made it wildly popular (see the API sketch after this Q&A).

  • What was the claim made by Helen Toner about Sam Altman's past?

    -Helen Toner claimed that Sam Altman had a history of being fired for deceitful and chaotic behavior, including from Y Combinator and his original startup, Loopt.

  • How did Paul Graham, the founder of Y Combinator, respond to the claim about Sam Altman's dismissal from Y Combinator?

    -Paul Graham clarified that Sam Altman was not fired but rather agreed to step down from Y Combinator to focus on OpenAI when it announced its for-profit subsidiary, which Sam was going to lead.

  • What is the concern regarding the influence of the Effective Altruism (EA) movement on AI policy?

    -The concern is that the EA movement, with its belief in the imminent risk of AI superintelligence and potential existential threats, may be pushing for extreme regulatory measures that could stifle innovation and progress in AI.

  • What is the view of some researchers and experts on the existential risk posed by AI?

    -Some researchers and experts believe that while existential risks could emerge, there is currently little evidence to suggest that future AIs will cause such destruction, and more pressing, real-world concerns about AI should be addressed.

  • What is the criticism of the EA movement's approach to AI safety?

    -The criticism is that the EA movement has hijacked the term 'AI safety' and focuses on extreme doomsday scenarios, which overshadows more practical and grounded concerns about AI's impact on society and the need for sensible regulations.

  • What is the argument made by the video script against the extreme regulatory measures proposed by some AI safety advocates?

    -The argument is that extreme measures, such as global bans on AI training runs or surveillance of GPUs, are not rational, that their proponents accept potentially disastrous consequences such as nuclear conflict, and that such views should not form the basis for governing and regulating AI development.
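
Regarding the GPT-3.5 question above, here is a purely illustrative sketch (not from the video) of how developers were already calling GPT-3.5-era models through OpenAI's public API months before ChatGPT launched. It assumes the legacy `openai` Python SDK (v0.x) and the `text-davinci-002` completions model, both publicly available in 2022.

```python
# Minimal sketch: querying a GPT-3.5-era model via the pre-ChatGPT
# completions API (legacy openai SDK, v0.x). Illustrative only.
import openai

openai.api_key = "sk-..."  # your API key

response = openai.Completion.create(
    model="text-davinci-002",  # GPT-3.5-era model, public well before ChatGPT
    prompt="Summarize the plot of Hamlet in two sentences.",
    max_tokens=80,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```

The video's point is that ChatGPT wrapped essentially this capability in a chat UI; the underlying model access was not new.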

Outlines

00:00

🤖 AI Controversy and Board Member's Claims

The video delves into the controversy surrounding the dismissal of Sam Altman from OpenAI, with a focus on the claims made by Helen Toner, an ex-board member. It discusses the community's divided opinion and the 'bombshell' revelations from Toner's interview, such as the allegation that the board was kept in the dark about ChatGPT until it appeared on Twitter. The video also covers the response from the current OpenAI board, which refutes Toner's claims and points to an external review conducted by the law firm WilmerHale that found no evidence of AI safety concerns leading to Altman's departure.

05:00

🔍 Misrepresentations and the Reality of AI Developments

This paragraph addresses perceived misrepresentations by Helen Toner regarding the OpenAI situation. It clarifies that the technology behind ChatGPT was not a secret and had been available for months, suggesting that Toner's claim of learning about it on Twitter may be an exaggeration. The paragraph also rebuts the claim that Sam Altman was fired from Y Combinator for deceitful behavior, with Paul Graham, a mentor to Altman, clarifying that Altman's move was a mutual decision to focus on OpenAI rather than a dismissal.

10:01

🧐 The Influence of Effective Altruism on AI Policy

The video script discusses the influence of the Effective Altruism (EA) movement on AI policy, suggesting that some members hold extreme views on AI safety, such as the belief in an imminent AI superintelligence that could lead to humanity's extinction. It raises concerns about the movement's approach to AI regulation, which includes ideas like banning certain hardware and enforcing global surveillance to prevent AI development. The paragraph also highlights the potential negative impact of these beliefs on the broader conversation around AI safety and policy.

15:02

💡 The Distortion of AI Safety Discourse

This section of the script criticizes the distortion of the AI safety discourse by certain groups with extreme views on AI, which it associates with the Effective Altruism movement. It argues that these groups are overshadowing more grounded and pressing concerns about AI's real-world applications and potential harms, such as the impact on marginalized communities. The video calls for a more balanced and evidence-based approach to AI safety, rather than one driven by fear of an AI apocalypse.

20:02

🛑 The Risks of Overzealous AI Regulation

The speaker expresses concern over the potential risks associated with overzealous AI regulation, particularly that advocated by certain groups within the Effective Altruism movement. The paragraph outlines extreme regulatory measures such as making hardware illegal or imposing pervasive surveillance on data centers. It emphasizes the need for a more nuanced and practical approach to AI regulation that doesn't stifle innovation and progress.

25:02

🌐 The Importance of Openness in AI Development

In this paragraph, the script highlights the benefits of open-source AI and the importance of sharing knowledge to improve security and prevent vulnerabilities, drawing a parallel with cybersecurity. It contrasts this with the views of a minority advocating for extreme measures such as global surveillance and potential military conflict to halt AI development. The speaker argues against letting such extreme perspectives govern and regulate AI, advocating for a balanced and rational approach to its development and safety.

🚀 Balancing Progress and Caution in AI Development

The final paragraph wraps up the discussion by emphasizing the need to balance progress and caution in AI development. It criticizes the extreme views that suggest halting all AI progress and instead calls for a reasoned, evidence-based approach to managing risks. The speaker encourages viewers to consider multiple perspectives on AI safety and to be wary of letting cultish ideologies dictate the future of AI regulation and development.

Keywords

💡OpenAI

OpenAI is a research laboratory that focuses on developing artificial intelligence (AI) technologies. In the context of the video, it's the organization where significant events have transpired, including the departure of Sam Altman and the subsequent controversy. The script discusses the internal conflicts and the board's response to the events that unfolded at OpenAI.

💡Sam Altman

Sam Altman is a prominent figure in the tech industry and was a key player at OpenAI. The video script revolves around his departure from OpenAI and the ensuing debate over the circumstances of his firing. His character and actions are central to the narrative being discussed.

💡Helen Toner

Helen Toner is mentioned as an ex-board member of OpenAI who has spoken out about the events surrounding Sam Altman's departure. Her perspective is presented as one of the key sources of information about the internal happenings at OpenAI, and the script discusses her claims and their reception.

💡AI Regulation

AI Regulation is a central theme in the video, which discusses the need for oversight and control over AI development. The script references Eliezer Yudkowsky's views on regulating AI, such as an international arrangement to monitor and license AI training technology, reflecting the broader conversation about how to manage the progress and safety of AI.

💡AGI (Artificial General Intelligence)

AGI refers to a form of AI that possesses the ability to understand, learn, and apply knowledge across a broad range of tasks at a level equal to or beyond that of a human. In the script, the concern about AGI's potential risks and the need to manage its development are highlighted, particularly in relation to the discussion on AI safety and regulation.

💡Effective Altruism

Effective Altruism is a philosophical and social movement that uses evidence and reason to determine the most effective ways to benefit others. The script mentions this movement in relation to certain individuals and organizations that have a significant influence on AI policy and the narrative around AI safety, suggesting a potential bias towards apocalyptic AI scenarios.

💡AI Safety

AI Safety is a field of study focused on ensuring that AI development is conducted in a manner that avoids potential harm to humanity. The video discusses how some groups have co-opted the term to push an agenda that may not be representative of the broader AI community, leading to a distorted view of AI risks.

💡Existential Risk (x-risk)

Existential risk, in the context of the video, refers to risks that could lead to the extinction or irreversible degradation of humanity, which some EA-aligned groups believe advanced AI poses. The script critiques this perspective as overly alarmist and potentially detrimental to a balanced conversation on AI development and policy.

💡WilmerHale

WilmerHale is a prestigious law firm mentioned in the script as having been commissioned to conduct an external review of the events leading up to Sam Altman's forced resignation from OpenAI. Their findings are significant because they contradict some of the claims made by Helen Toner and others.

💡ChatGPT

ChatGPT is OpenAI's conversational AI application, released in November 2022 and built on the existing GPT-3.5 model. The script discusses the controversy around its release and the claim that board members were kept in the dark about it, which is a point of contention in the narrative.

💡Y Combinator

Y Combinator is a well-known startup accelerator that has been influential in the tech industry. The script references Sam Altman's history with Y Combinator, specifically addressing claims about his departure and the actual circumstances that led to him focusing on OpenAI instead.

Highlights

Ex-board member of OpenAI, Helen Toner, speaks out on Sam Altman's firing.

OpenAI's response to Helen Toner's claims, denying the allegations made by her.

External review by law firm WilmerHale found no AI safety concerns led to Altman's departure.

ChatGPT's release was not a secret, contrary to Toner's claims; it was based on the existing GPT-3.5 technology.

Paul Graham clarifies Sam Altman was not fired from Y Combinator, contradicting media narratives.

Critique of Helen Toner's actions and EA (Effective Altruism) movement's influence on AI policy.

Concerns about EA's extreme views on AI regulation, including global bans and surveillance.

EA's shift from humanitarian goals to focusing on AI doomsday scenarios.

Debate on the validity of EA's claims about AI's potential to cause human extinction.

Impact of EA-funded research on the AI policy landscape in Washington.

Criticism of EA's approach to AI safety versus other researchers with different focuses.

Discussion on the need for balanced AI regulation that accounts for both risks and benefits.

The importance of not letting extreme views dominate AI development and policy.

EA's self-identification as part of AI safety and the confusion it causes in the field.

The potential consequences of EA's influence on global AI development and international relations.

The contrast between EA's apocalyptic views and more nuanced perspectives on AI's future.

The call for a rational and balanced approach to AI development and its associated risks.

Transcripts

Big AI news today: an ex-board member of OpenAI has come out talking about what actually happened during Sam Altman's firing. We covered that a few days ago; her name is Helen Toner. But today we get the OpenAI response, as well as some other people weighing in on whether or not some of the claims she makes are truthful. So let's take a look at that. But before we do, take a listen to this 30-second clip of Eliezer Yudkowsky and tell me: do you agree with what he's saying? Do you agree with how he thinks we should regulate AI? How you feel about what you will hear today will greatly depend on whether you agree with what he is saying.

"Should this type of research and development be made against the law?" "Yeah, basically. I think that we should track all the GPUs, have international arrangements for all of the AI training tech to end up in only monitored, supervised, licensed data centers in allied countries, and just not permit training runs more powerful than GPT-4. That whole line of reasoning is not that regulating the stuff will protect you from a superintelligence, because it will not. That's more in the hopes that people change their minds later, maybe after some major disaster that doesn't kill everyone. You don't just press the off switch to deal with the superintelligence; the superintelligence does not let you know that you need to press the off switch until you are already dead." AGI rolls around only once. Subscribe.

So Helen Toner comes out spilling the beans on what happened during that whole OpenAI fiasco where Sam Altman got fired and tons of people had to be brought in to manage the situation and figure out who was going to be running OpenAI moving forward. One thing that's obvious to me is that the community, the people following this, you, I, everybody else, were very divided on what actually happened: who is at fault, who is to blame, who's telling the truth, who's being honest. So the big bombshell revelations in Helen Toner's interview were the following. The biggest one, the one I think a lot of people are quoting, is that they learned about ChatGPT on Twitter. That was a line she used to point out that they were being kept in the dark about ChatGPT, this great new breakthrough in AI technology. She also mentioned that Sam had a history of being fired for his "deceitful and chaotic behavior," as she puts it (maybe she was quoting somebody): she said he was fired from Y Combinator, and he was fired from Loopt, his original startup. She also said that Sam didn't inform the board that he owned the OpenAI Startup Fund. Now, we've covered that interview already, but since then there have been a lot more revelations about what's actually been happening, and some people have had a chance to reply to some of these allegations. So today let's take a look at it.

Unfortunately, I know that for a lot of people this seems to be becoming kind of a tribal thing: you have red team and blue team, and you're just rooting for the people you want to win. I don't know if that's the right approach here. I think, as you'll find out, there's a lot more gray area here than at first meets the eye. But let's take a look.

First of all, here is the current OpenAI board responding to Helen Toner about her claims. They say: "We do not accept the claims made by Ms. Toner and Ms. McCauley regarding events at OpenAI." Those were the two board members who were removed after the firing of Sam Altman. They continue: "The first step we took was to commission an external review of events leading up to Mr. Altman's forced resignation." And this is true: they hired what they call a prestigious law firm, WilmerHale (we covered this when it happened). WilmerHale led the review, conducted dozens of interviews with members of OpenAI's previous board, including Ms. Toner and Ms. McCauley, OpenAI executives, advisers to the previous board, and other pertinent witnesses, reviewed more than 300,000 documents, and evaluated various corporate actions. Both Ms. Toner and Ms. McCauley provided ample input to the review, and there you have it: the review rejected the idea that any kind of AI safety concern necessitated Mr. Altman's replacement. The law firm found that the prior board's decision "did not arise out of concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners." They add: "We regret that Ms. Toner continues to revisit the issues that were thoroughly examined by the WilmerHale-led review rather than moving forward," noting that Ms. Toner has continued to make claims in the press.

Although it is perhaps difficult to remember now, OpenAI released ChatGPT in November 2022 as a research project, to learn more about how useful its models are in conversational settings. It was built on GPT-3.5, an existing AI model that had already been available for more than eight months at the time. This is an important thing to understand: this technology was available, and everyone knew about it. There were companies, like for example Jarvis AI (I think it was called; I think they later changed their name to Jasper AI), basically writing SEO-optimized articles, that had been running on GPT-3.5 for a long, long time. And if you look at OpenAI's YouTube channel: this was two years ago, August 10th, 2021 (that seems like a lot more than two years ago). This is the OpenAI Codex demo that Ilya Sutskever and Greg are presenting, which already resembles a back-and-forth chat application, though this one is for coding. This was Codex, and they were inviting people to participate, to use the API for their own needs. So there wasn't really anything new that was released. ChatGPT was a little demo; nobody thought it was going to blow up the way it did. They took an already existing technology that hopefully everyone was aware of (if you're on the board, I hope you were aware of what the company was working on), packaged it up in a chat format, put it out there for research purposes, and it blew up. Again, I don't think anyone quite expected that it would become the fastest-growing app of all time. Do you think it would have been possible to just predict that would happen? I remember Elon Musk posting that it's "scary good," and I think that's what got me to try it initially.

So, her saying that she was made aware of ChatGPT on Twitter: I'm not exactly sure what that means, unless she just wasn't paying attention to what was happening in the company. Because, again, all those pieces of technology were available, and they were available to the public: there was the Playground, there was the API. The only thing that changed was the user interface, and that user interface just clicked with everybody. It opened everyone's eyes to what was possible. So I think it's one of those things where she's telling the truth, but the reason people are reposting it is that it sounds so much scarier, as if she had no idea this product was released. It was already released; it was just a different UI that was put in place. I think that's a fair thing to say because, again, the Playground (the web page where you can mess around with the model) was already available, and the API was already available. So that's point number one that she made, and to me it seems like a dubious claim. Maybe it's made to sound sensational when it's really not; I'm not sure, but something about it feels a little bit fishy.

But let's continue. She also said that Sam Altman was fired from Y Combinator for his deceptive and chaotic behavior. So yesterday, tons of newspapers popped up with their stories about how Paul Graham fired Sam Altman from Y Combinator: Graham, who was something of a mentor to the young tech guru Sam Altman, "flew from the United Kingdom to San Francisco to personally give his protégé the boot," to fire him. Here's The Washington Post: "Altman's polarizing past hints at OpenAI board's reason for firing him." It says the same thing: that Graham flew from the United Kingdom to San Francisco to give his protégé the boot; he fired Sam Altman. Before we continue, can we agree that that's what these newspapers say, that this is the impression they give you? If you had to summarize it for somebody, would that be an accurate summarization? I think so, right? Here's the problem with that: Paul Graham today commented on what actually happened. He said: "I get tired of hearing that Y Combinator fired Sam, so here's what actually happened. For several years he was running both YC and OpenAI, but when OpenAI announced that it was going to have a for-profit subsidiary and that Sam was going to be the CEO, we (specifically Jessica) told him that if he was going to work full-time on OpenAI, we should find someone else to run Y Combinator, and he agreed. If he had said that he was going to find someone else to be the CEO of OpenAI so that he could focus 100% on Y Combinator, we would have been just fine with that too. We didn't want him to leave, just to choose one or the other." Now, some people in the comments are trying to push this, saying, well, that's what firing is, that's the same thing. And Paul responds: "No, we would have been happy if he stayed and got someone else to run OpenAI. Can't you read?" I mean, if you are okay with a person staying and running your company, if the issue is just that you don't want a split focus, or potentially a conflict of interest, you're saying: hey, either you're 100% here or 100% there. Being okay with him staying to run your company 100% is not the same as firing him. That's not the same as, as The Washington Post puts it, "giving him the boot," however you want to phrase it. That's not what happened here. So, again, that to me seems like another lie that everybody bought from Helen Toner.

And again, as we covered yesterday, the issue here is that some of the organizations she is somehow affiliated with do specifically tell their members: here are our talking points, here's what we actually believe, on one hand; and on the other hand, here are our talking points for the normies, for the people who might not agree with us. Here's a post by Nathan Lands saying: "Paul Graham says the story about Sam Altman being fired from Y Combinator is not true. I think there are many cases like this where people are assuming bad things about Sam that probably aren't true. I've never met Sam, but I've only ever heard great things about him as a person. Most say he's one of the most genuinely nice and intelligent people they've ever met." And again, before a lot of the stuff that was happening at OpenAI, you would only hear nice things about Sam Altman.

By the way, me personally, I'm never surprised when some of these hard-charging people aren't, quote unquote, nice. I think that deep down, to go that hard after some of these goals, you kind of have to be a bit of a killer. Look at Steve Jobs: when his biography was released and we started learning about how he was (I mean, there were rumors beforehand), he wasn't a very nice guy. People didn't really love him all the time. In fact, some of the people who worked closely with him said that while they really didn't like working with him because of how he was, later on they reflected that during that time they were pushed harder and accomplished more; they extracted more from themselves, and the output was much better, at a much higher level, because of how he pushed them. Same thing with Elon Musk: the latest book about him talks about "demon mode," this rage he goes into to push certain projects through, to put pressure on people. Was Bill Gates a nice guy? We're always so surprised when these hard-charging people who achieve so much against all odds aren't super duper nice. Ellen DeGeneres: remember when it came out that maybe she wasn't the greatest person to her staff, and people were shocked? But she dances so well; she always does her little happy dance; how could she be a bad person? So, none of this is to defend Sam Altman, necessarily. I'm not saying he's a saint and anybody against him is a bad guy, and you're free to not like Sam Altman and to disagree with how he's running things; the point of this video is not to dissuade you from that. I'm just saying: don't buy into everything that's said, especially by people who might have very specific motives to push through, motives that might make it hard to convince people to follow you if you said them out loud. And we'll talk more about that in a second.

Here's Rick Burton. I'm not familiar with this person, so take this with a grain of salt, but he's somebody who came out and spoke against Helen Toner, saying: "I lived in a community with Helen Toner. Let me tell you what she is. Helen is the very worst that academia has to offer. She thinks opinions matter more than actions. While she was writing puff pieces about China, Sam Altman was working. She lucked into the OpenAI board and staged a coup. Helen Toner has destroyed value. She has created nothing of value. She's not open to open debate, and now she's using her dying voice to hurt Sam. Helen completely misunderstood what a board does from day one. It is there for quarterly oversight and acting as a check on the CEO. She never gave feedback to Sam; she just tried to fire him. This is not what a competent board does; they work on the problems."

Again, as she discusses in the interview, they assumed that if Sam heard about the firing, if he got any word about it, he would try to counteract it somehow, so they purposely set it up in such a way that he would have no knowledge of it. They didn't warn him, they didn't talk to him, and as far as I can tell they never tried to work the problem out. Also, if a board is there for quarterly oversight, then yeah, they're probably not reporting to her every UI change or every release of an app. She must have known about GPT-3.5, or GPT-3 before it, because it was there eight to ten months before the release of ChatGPT, which, again, was more of a UI change. I know it seems big to us now, but the technology was all there; it's just that the wrapper that got released to the public was much more popular than anyone could have expected.

Now, one of the reasons people are worried about people with a background in EA, the effective altruist movement, if they're somehow linked to it: here, for example, staffers at NIST would revolt against the expected appointment of an EA AI researcher. The reason is that a lot of the beliefs these organizations hold are probably not beliefs that you and I share. They believe things like: we're only months or years away from building an AI superintelligence able to outsmart the world's collective efforts to control it. So we're potentially months away from an AI superintelligence, and what does that mean? Well, according to Eliezer Yudkowsky, if stopping malignant AI requires war between nuclear-armed nations, that would be a price worth paying. Do you agree with that mentality? Should we pause all AI indefinitely, stop any progress, try to control all the GPUs so that nobody is able to research or do any work with AI, and then spy on other nations to make sure they're not doing it, and if they are, treat nuclear war as an option to shut them down, to prevent them from working on AI? Do you agree with that statement? This is Politico.com, by the way, that I'm reading from. It continues: the prophets of the AI apocalypse are boosted by an avalanche of tech dollars, and also billions in crypto funds (as we'll see in a second), with much of it flowing through Open Philanthropy, a major funder of effective altruist causes. "It's an epic infiltration," said one biosecurity researcher in Washington. And a lot of these EA people, the members of this movement, are usually white, typically male, and often hail from privileged backgrounds; and, like many of her peers, Connell calls EA a cult. Sam Bankman-Fried is part of that, as some would say, cult; he was convicted of stealing as much as $10 billion from his customers. They literally believe that they're saving the world; that's their mission. These effective altruists truly believe what they're saying about AI safety: the idea that within a few months or a few years it'll cause the extinction of the human race unless we stop all progress on it now.

And this is the problem with all of that, the thing I hope more people understand: they refer to themselves as being part of "AI safety." They've hijacked that term. Instead of saying they're AI doomers, AI apocalypse, you know, Terminator 2, AI is going to turn us all into paperclips, they say "AI safety," which is a problem because, as many longtime AI and biosecurity researchers in Washington say, there's much more evidence backing up the less-than-existential AI concerns. While most acknowledge the possibility that existential risks could one day emerge, they say there's so far little evidence (is there any evidence?) to suggest that future AIs will cause such destruction, even when paired with biotechnology. The point here is that we have much more pressing concerns than Terminator robots marching down the street. Yes, we need to be cognizant of that; yes, we need to make sure we're not ignoring the risk of rogue AI, etc. But as AI comes into various software, various businesses, various government organizations, we have very pressing, very real concerns about how it's going to be used, and these doomsday scenarios are corrupting that conversation. They're leading us away from the things that actually matter. Imagine you're trying to regulate cars and auto safety: seat belts, making sure you have enough lights, crumple zones, the safety features of automobiles. But there's this other group slowly taking over Washington that defines auto safety as "no, we have to get rid of all cars forever, because cars are dangerous." So while the normal, reasonable people are trying to make cars safer, the other people say they also want to make cars safer, but their goal is actually just getting rid of all of them, because they believe that one day cars will rise up and kill everybody, or whatever. You probably don't want those people writing the regulations.

Here's an example: Deborah Raji, a Mozilla fellow and AI researcher at Berkeley, whose research focuses on how AI can harm marginalized communities. That work was completely overshadowed by Open Philanthropy-funded researchers who suggested that LLMs like ChatGPT could supercharge the development of bioweapons. Raji's response is that if you just look online for a second, you can find all that stuff on Google; the fact that you can get an LLM to regurgitate it if you try hard enough is nothing exceptional. But her research is left in the dust because she doesn't have the funds that these other researchers, the ones concerned about an AI doomsday scenario, have. As EAs bring their message to virtually every corner of the nation's capital, experts are warning that the tech-funded flood is reshaping Washington's policy landscape, driving researchers across many organizations to focus on the existential risks posed by new technologies, often to the exclusion of other issues with firmer empirical grounding. As another researcher puts it: "I don't want to call myself AI safety; that word is tainted now." They want to call themselves something different, like system safety or systems engineering, because saying "AI safety" now might link them to this cult-like movement concerned about things that, again, don't really have much proof behind them.

By the way, the founders of Anthropic used to consider themselves part of EA; now it seems like maybe they're trying to distance themselves from that movement. Here's Steven Pinker, cognitive scientist at Harvard, saying: "It's upsetting to read how the shift in EA from saving lives in Africa" (because, again, that's where they started: trying to do good for the world and for humanity, reduce poverty, reduce suffering, etc.) "to paying brainiacs to fret about how AI will turn us into paperclips may not have been mission drift but bait-and-switch. EA's core ideas are still sound, and some of its charities are praiseworthy. I hope the movement regains its bearings." And here's Yann LeCun saying: "As I have pointed out before, AI doomerism is a kind of apocalyptic cult. Why would its most vocal advocates come from ultra-religious families that they broke away from because of science?"

And the big concern with some of the people who believe in this stuff, if they're allowed to regulate where we're going with AI, is that their goals aren't to create the common-sense regulations most of us are talking about: some visibility into who's building what, some reporting requirements, some safety limits, some safety testing and red-teaming efforts, etc. That's what they say they want, common-sense regulations; that's what they're telling you. But what are they saying behind closed doors? Well, here is Jaan Tallinn, co-founder of the Future of Life Institute and one of the biggest x-risk billionaires. Here's his plan for regulatory interventions; his hope for the foreseeable future is the following: "I do think that governments, certainly, governments can make things illegal. Well, you can make hardware illegal. You can also say that producing graphics cards above a certain capability level is now illegal, and suddenly you have much, much more runway as a civilization." Asked whether you get into a territory of having to put surveillance on what code is running in a data center: "Yeah, I mean, regulating software is much, much harder than hardware. If you let Moore's Law continue, then the surveillance has to be more and more pervasive. So my focus for the foreseeable future will be on regulatory interventions, trying to educate lawmakers, helping, and perhaps hiring lobbyists, to try to make the world safer."

Again, keep in mind they have billions of dollars for this. Billions. So they're not talking about regulating in the way that you and I think of regulating; they're talking about a global ban on any sort of training runs. Take a listen: "The nasty secret of the AI field is that AIs are not built, they are grown. The way you build a frontier model is you take like two pages of code, you put them on tens of thousands of graphics cards, and you let them hum for months, and then you open up the hood and see what creature pops out and what you can do with this creature. So I think the capacity to regulate things and deal with various liability constraints, etc., applies to what happens after, once this creature has been tamed (that's what the fine-tuning and reinforcement learning from human feedback, etc., are doing) and then productized: how do you deal with these issues? This is where we need the competence of other industries. But how can you keep the system from escaping during the training run? This is a completely novel issue for this species, and we need some other approaches, like just banning those training runs."
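To make the "grown, not built" point concrete: the training program for a large language model really is short, and the scale lives in the data and the months of GPU time. Below is a toy, generic next-token-prediction training loop in PyTorch. It is a sketch of the general technique only, with made-up dimensions and random stand-in data, not any lab's actual code.

```python
# Toy sketch of the "two pages of code" idea: a generic next-token
# training loop. Real frontier runs swap in a transformer, real tokenized
# text, and tens of thousands of GPUs humming for months.
import torch
import torch.nn as nn

vocab_size, dim = 256, 128
model = nn.Sequential(              # stand-in for a large transformer stack
    nn.Embedding(vocab_size, dim),
    nn.Linear(dim, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):                           # frontier runs: months, not steps
    batch = torch.randint(0, vocab_size, (8, 64))  # stand-in for tokenized text
    inputs, targets = batch[:, :-1], batch[:, 1:]  # objective: predict the next token
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Everything after a loop like this finishes (the fine-tuning, RLHF, and productization Tallinn mentions) is the "taming the creature" stage; the loop itself is the "growing" stage that the proposed training-run bans target.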

The other thing I forgot to mention: Helen Toner addressed why everyone backed Sam Altman, because remember, all the employees signed the letter and tweeted those heart emojis saying they wanted to stay at OpenAI, to stay with Sam Altman, even though he was apparently psychologically abusing everyone. Helen Toner was saying the only reason they did that is that they were afraid OpenAI would be destroyed under her leadership, or if she did what she was trying to do. And then during the interview she said, well, that was not true, it would never have been destroyed, that was not on the table. But the thing is, that's not what she was saying while the whole thing was unfolding. Back then she was saying that the destruction of the company "could be consistent with the board's mission." She was trying to either have Anthropic absorb OpenAI or destroy it, which would again destroy a lot of value: a lot of these people's hard work and the equity they hold in the business. Also, if you recall, when OpenAI researchers were fired for leaking information out of OpenAI, at least one of them had ties to the effective altruism movement.

So I do apologize for that rant. I know we've covered some of this before, but seeing this discussed by other people in the space, other YouTubers talking about AI, I was personally a little concerned that a lot of people seem to be taking what she's saying at face value. Again, I'm more than happy to hear if she has any proof of anything, any specific things we can look at and say, okay, it does look like this was correct. So far it seems like an attempt to paint Sam Altman in a bad light, which, again, is fine. I'm not here to defend Sam. You might not like him; you might prefer that somebody else succeeds with AI, maybe Elon Musk, or perhaps Google, or Anthropic. And I know a lot of us, myself included, believe in open source. We think open-source AI is an important technology; it obviously comes with some risks as well, but it has a lot of upside. For example, open sourcing some cybersecurity technology, or at least people getting together and sharing what they learn about cybersecurity, helps everyone be more secure. If every company just kept what they knew to themselves, everyone would be more exposed, more vulnerable, because they aren't sharing their knowledge. With open source, everyone can contribute to that knowledge, potentially helping prevent vulnerabilities and things like that.

But it's important to understand, whatever your views on any of these companies, that some of the voices you're hearing come from a small minority of people who want everything shut down, who want some sort of global surveillance, surveillance on GPUs, limits on how advanced our chips can be, the pause button hit indefinitely, and, as you've seen, even potentially going after other countries with nuclear weapons. Two nuclear powers duking it out to prevent the development of AI: I don't know about you, but that seems a little bit crazy, doesn't it? We know we can destroy the Earth dozens of times over with the nuclear weapons we have; it takes a few bad decisions and everyone's gone. But they're willing to flip the coin on that to make sure AI doesn't get developed. I don't think this is a very rational perspective. This is very cultish.

This is from the blog of Vitalik Buterin, who contributed close to a billion dollars to some of these causes. He's the person behind Ethereum; he became very wealthy, bought some Dogecoin (or some doggy coin), and when it did very well and skyrocketed, he decided, hey, let's throw a billion at these guys. But based on his writing (and he has addressed this), I don't think he is necessarily aligned with them; he's not part of that AI doomer mentality. Here he describes the world as he sees it. In his first image you have the anti-technology view: if we move forward, the future is dark, scary, bad; technological progress is bad and safety is behind us. So, dystopia ahead, safety behind: we need to stop progress, we need to unwind the technology, the AI, etc. Then there's the accelerationist view: dangers behind and utopia ahead. The idea is that technological progress will improve the world, humanity, etc., and that if we don't progress, we face the dangers behind us. I think EA seemingly adopts the anti-technology view, or at least the anti-AI version of it, and of course we have the accelerationists saying, you know, full steam ahead, let's go. His own view, as he describes it, is more like: yes, there are dangers behind, so we have to progress, but we also have multiple paths forward ahead, some good, some bad. Which I think is very reasonable.

So, with that said, my whole point in this is: don't just approach this from a stance of "is OpenAI good or bad, is Sam Altman good or bad, should we have AI safety." I think most people would agree AI should be deployed safely; there are a lot of different bad things that can happen, and we need to be very careful how we approach it. During the development of the nuclear bomb, someone suggested there was a chance the explosion would trigger a chain reaction in the entire atmosphere, basically setting it on fire, a fiery death for the entire world. And that triggered an investigation: people looked into it and tried to figure out the chance of that happening. They logically examined that potential x-risk. Smart people who knew what they were talking about, who had field expertise, who had education and training in that particular field, studied the question and came up with answers. That's how it should be. We do the same thing when we roll out any technology: new cars, new drugs, new electronic devices, new kids' toys. There are laws and regulations for how we reduce those risks. That's very reasonable; no one here is against that. But if there are people out there who are convinced that we need to start nuking other nations off the face of the Earth if they're developing AI, who think we should monitor all hardware, all GPUs, all software, to ban training runs, to create a global surveillance system to make sure none of that happens, then I hope you agree with me that maybe, just maybe, we shouldn't let those people govern and regulate the world. That's not so crazy; I hope you agree. With that said, my name is Wes Roth, and thank you for watching.
