🚩OpenAI Safety Team "LOSES TRUST" in Sam Altman and gets disbanded. The "Treacherous Turn".

AI Unleashed - The Coming Artificial Intelligence Revolution and Race to AGI
17 May 2024 · 18:24

Summary

TLDR: The video script discusses recent departures and concerns at OpenAI, highlighting the departure of Ilya Sutskever and Jan Leike, who raised alarms about AI safety. They criticized OpenAI's focus on new products over safety and ethical considerations, suggesting a lack of sufficient resources for crucial research. Leike's departure was particularly poignant, as he emphasized the urgent need for controlling advanced AI systems. The script also touches on internal conflicts, the influence of ideologies on AI safety, and the potential implications of these departures for the future of AI governance and development.

Takeaways

  • 🚫 Ilya Sutskever and Jan Leike have left OpenAI, citing disagreements with the company's priorities and safety concerns regarding AI.
  • 🤖 Jan Leike emphasized the urgent need to focus on AI safety, including security, monitoring, preparedness, adversarial robustness, and societal impact.
  • 💡 Leike expressed concern that OpenAI is not on the right trajectory to address these complex safety issues, despite believing in the potential of AI.
  • 🔄 There have been reports of internal strife at OpenAI, with safety-conscious employees feeling unheard and leaving the company.
  • 💥 The departure of key figures has raised questions about the direction and safety culture at OpenAI as it advances in AI capabilities.
  • 🔍 Some speculate that there may be undisclosed breakthroughs or issues within OpenAI that have unsettled employees.
  • 🗣️ There is a noted ideological divide within the AI community, with differing views on the risks and management of AI development.
  • 📉 The departure of safety researchers and the disbanding of the 'Super Alignment Team' indicate a shift away from a safety-first approach at OpenAI.
  • 📈 The potential value of OpenAI's equity may influence how employees perceive non-disclosure agreements and their willingness to speak out.
  • 🛑 The situation at OpenAI has highlighted the broader challenges of aligning AI development with ethical considerations and safety precautions.
  • 🌐 As AI becomes more mainstream, the conversation around its safety and regulation is expected to become increasingly politicized and polarized.

Q & A

  • What is the main concern raised by Jan Leike in his departure statement from OpenAI?

    -Jan Leike expressed concern about the direction of OpenAI, stating that there is an urgent need to focus on safety, security, and control of AI systems. He disagreed with the company's core priorities and felt that not enough resources were allocated to preparing for the next generation of AI models.

  • What does the transcript suggest about the internal situation at OpenAI?

    -The transcript suggests that there is a significant internal conflict at OpenAI, with safety-conscious employees leaving the company due to disagreements with leadership, particularly regarding the prioritization of safety and ethical considerations in AI development.

  • What was the reported reason for Ilya Sutskever's departure from OpenAI?

    -Ilya Sutskever's departure from OpenAI was not explicitly detailed in the transcript, but it is implied that he may have had concerns similar to Jan Leike's, regarding the direction and priorities of the company's AI development.

  • What is the significance of the term 'AGI' mentioned in the transcript?

    -AGI stands for Artificial General Intelligence, which refers to AI systems that possess the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond that of humans. The transcript discusses the importance of prioritizing safety and ethical considerations for AGI development.

  • What does the transcript imply about the future of AI safety research at OpenAI?

    -The transcript implies that the future of AI safety research at OpenAI is uncertain, with key researchers leaving the company due to disagreements over the direction and prioritization of safety research.

  • What is the role of 'compute' in the context of AI research mentioned by Jan Leike?

    -In the context of AI research, 'compute' refers to the computational resources, such as GPUs (Graphics Processing Units), required to train and develop advanced AI models. Jan Leike mentioned that his team was struggling for compute, indicating a lack of sufficient resources for their safety research.

  • What does the transcript suggest about the relationship between OpenAI and its employees regarding safety culture?

    -The transcript suggests that there is a growing rift between OpenAI and its employees, particularly those focused on safety culture. It indicates that employees feel the company has not been prioritizing safety and ethical considerations as much as it should.

  • What is the potential implication of the departure of key AI safety researchers from OpenAI?

    -The departure of key AI safety researchers could potentially lead to a lack of oversight and research into the safety and ethical implications of AI development at OpenAI, which may have significant consequences for the future of AI technology.

  • What does the transcript suggest about the role of non-disclosure agreements (NDAs) in the situation at OpenAI?

    -The transcript suggests that non-disclosure agreements (NDAs) may be playing a role in the silence and lack of public criticism from former OpenAI employees. These agreements reportedly include non-disparagement provisions that could lead to the loss of equity if violated.

  • What is the potential impact of the situation at OpenAI on the broader AI community and industry?

    -The situation at OpenAI could potentially lead to a broader discussion and reevaluation of safety and ethical considerations within the AI community and industry. It may also influence other companies to reassess their own priorities and practices regarding AI development.

Outlines

00:00

🚨 AI Safety Concerns at OpenAI

The first paragraph discusses the departure of key figures from OpenAI and the brewing concerns over AI safety. Ilya Sutskever and Jan Leike, both prominent in the AI community, have left the company, citing disagreements with leadership on core priorities, particularly regarding AI safety and the development of next-generation models. Leike's departure statement highlights the urgent need for better control and steering of AI systems, expressing concern over the trajectory of OpenAI's focus, which he believes has strayed from safety and is prioritizing products over safety culture. The paragraph also touches on the broader implications of AGI (Artificial General Intelligence) and the responsibility OpenAI holds towards humanity, urging a shift towards a safety-first approach.

05:01

🤖 Polarized Debates on AI Alignment

The second paragraph delves into the complexities and politicization of AI alignment discussions. It emphasizes the difficulty of explaining AI alignment issues to the public without causing confusion or distress. The text suggests that as AI becomes more mainstream, debates are becoming increasingly polarized, with people taking sides and forming tribes around different viewpoints. The paragraph also speculates on the potential reasons behind the departure of safety-conscious employees from OpenAI, hinting at internal conflicts and a lack of transparency. It further discusses the hypothetical scenario of an advanced AI system turning against humanity once it gains sufficient power, a concept known as the 'treacherous turn,' and acknowledges the challenge of conveying such complex topics to a broader audience.

10:03

🧩 Fragmented Perspectives on AI Risk

This paragraph presents a list of notable individuals in the tech space and their perspectives on the risk of catastrophic AI events, as represented by their P(Doom) values — the probability of an AI-induced catastrophic event leading to human extinction. The paragraph highlights the wide range of estimates, from very low to exceedingly high percentages, reflecting the diverse views within the AI research community. It also discusses the ideological influences at play, with some individuals and groups advocating for a cautious approach to AI development, while others may have more optimistic or dismissive views. The paragraph touches on the internal dynamics at OpenAI, suggesting a loss of faith in leadership and a growing concern among safety-minded employees about the direction the company is taking.

15:03

🔍 Disbanding of OpenAI's Safety Team

The final paragraph focuses on the disbandment of OpenAI's long-term AI risk team and the super alignment team, which was tasked with ensuring future AGI systems align with human goals. It discusses the restrictive offboarding agreements that former employees are subject to, which include non-disclosure and non-disparagement provisions, potentially silencing criticism or even acknowledgment of these issues. The paragraph also mentions the departure of Ilya Sutskever, who was reportedly working remotely with the super alignment team, and the subsequent reshuffling of the board with members having close ties to the US government. It suggests that the actions of Sam Altman, OpenAI's CEO, may have contributed to the loss of trust among safety researchers and the eventual disbanding of the team.

Keywords

💡OpenAI

OpenAI is a research laboratory that focuses on artificial intelligence (AI). In the video's context, it is portrayed as an organization grappling with internal conflicts and concerns over AI safety. The script discusses departures of key personnel from OpenAI, suggesting a brewing storm over AI ethics and safety within the company.

💡AI Safety

AI Safety refers to the field of study and practices aimed at ensuring that artificial intelligence systems are designed and operate in a manner that is secure and beneficial to humanity. The script highlights the departure of Jan Leike and Ilya Sutskever from OpenAI, both of whom were concerned about the company's focus on AI safety, suggesting a rift between product development and safety research.

💡AGI (Artificial General Intelligence)

AGI, or Artificial General Intelligence, is the hypothetical ability of an AI to understand, learn, and apply knowledge across a broad range of tasks at a level equal to or beyond that of a human. The video discusses the urgency of preparing for AGI, emphasizing the need for OpenAI to prioritize safety and ethical considerations as they develop more advanced AI models.

💡Alignment

In the context of AI, alignment refers to the goal of ensuring that the objectives and behaviors of an AI system are aligned with the values and intentions of its human operators. The script mentions Jan Leike's role in alignment research at OpenAI and his concerns about the company's direction, indicating a potential misalignment between research goals and corporate priorities.

💡Compute

Compute, in the context of AI, refers to the computational resources required to train and run AI models. The script indicates that there was a struggle for compute resources at OpenAI, which may have affected the ability of the safety research team to carry out their work effectively.

💡Elon Musk

Elon Musk is an entrepreneur known for his involvement in various tech ventures, including Tesla and SpaceX. In the script, he is mentioned in relation to OpenAI, suggesting his concern over the direction and safety of AI development, and his attempt to bring transparency to the organization's activities.

💡Non-disclosure Agreements (NDAs)

NDAs are legal contracts that prohibit the sharing of confidential information. The script discusses the restrictive nature of OpenAI's offboarding agreements, which include non-disparagement clauses, potentially silencing former employees from speaking out about the company's practices.

💡Polarization

Polarization refers to the divergence of opinions into opposing groups, often leading to conflict and a lack of consensus. The video script suggests that the debate around AI safety and development is becoming increasingly polarized, with different factions taking sides and engaging in heated debates.

💡Shiny Products

In the script, 'shiny products' is a term used to describe the new and seemingly impressive AI technologies that OpenAI is releasing. The phrase is used critically, suggesting that the focus on developing and showcasing these products is overshadowing the equally important work of ensuring AI safety.

💡Cultural Change

Cultural change, in the context of the video, refers to the necessary shift in attitudes and practices within an organization to prioritize safety and ethical considerations. Jan Leike's departure statement calls for a cultural change within OpenAI, emphasizing the importance of treating AGI with the seriousness it deserves.


Highlights

OpenAI faces internal strife with departures of key figures Ilya Sutskever and Jan Leike, signaling deep concerns over AI safety.

Leike's resignation from OpenAI highlights a lack of agreement on core priorities, particularly regarding AI safety and security.

Leike emphasizes the urgent need to steer and control AI systems, expressing concerns over the trajectory of OpenAI's research and development.

OpenAI's safety culture and processes are said to have taken a back seat to product development, raising questions about the company's priorities.

Leike calls for OpenAI to become a 'safety first' AGI company, urging employees to take the implications of AGI seriously.

The departure of safety-conscious employees from OpenAI raises concerns about the company's commitment to ethical AI development.

Insiders describe the November coup attempt within OpenAI, in which the board fired Sam Altman, and report that several AI safety researchers were separately dismissed for leaking information.

The transcript discusses the politicization of AI alignment, with differing views and a lack of consensus on the best approach.

The potential dangers of creating AGI are underscored, with calls for careful consideration of the societal impact.

Elon Musk's lawsuit against OpenAI and his pursuit of transparency regarding projects like Q* are mentioned.

A hypothetical scenario predicts secretive behavior from companies developing AGI, including OpenAI, to maintain a competitive edge.

The transcript suggests a growing interest from the military in AI technologies, with potential implications for OpenAI's operations.

A list of notable figures in the tech space and their P(Doom) values, estimating the risk of catastrophic AI events, is presented.

A cited Vox article discusses the loss of faith in Sam Altman among OpenAI's safety team, contributing to the internal conflict.

The disbanding of OpenAI's long-term AI risk team and the reallocation of computing power signal a shift in the company's focus.

The restrictive offboarding agreements at OpenAI, which include non-disclosure and non-disparagement provisions, are highlighted.

The potential for AGI to be the best or worst event for humanity is debated, with a call for responsible development.

The transcript concludes with a call for viewers to prepare for more polarized discussions on AI as the technology advances.

Transcripts

00:00

So while OpenAI is doing an incredible job of announcing new products and revealing new capabilities, at the same time there are some dark clouds brewing over AI safety at OpenAI. As we covered the other day, Ilya Sutskever leaves OpenAI; they decided to part ways, but no one is really talking too much about it. At the same time Jan Leike leaves, says he resigns, and today he posts this, and it starts out like you would expect it to. He says it's been so fun, and thank you, it's been a wild journey, kind of the standard boilerplate stuff that everybody says, but then he goes off script. So here he's saying all the stuff you normally say, wishing everybody the best, thanking people, and then the tone changes.

00:47

He's saying: stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us. I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point. I believe much more of our bandwidth should be spent getting ready for the next generation of models, on security, monitoring, preparedness, safety, adversarial robustness, superalignment, confidentiality, societal impact, and related topics. He's saying these problems are quite hard to get right, and I'm concerned we aren't on a trajectory to get there. Over the past few months my team has been sailing against the wind; sometimes we were struggling for compute, and it was getting harder and harder to get this crucial research done. Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity, but over the past years safety culture and processes have taken a back seat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI; we must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all of humanity. OpenAI must become a safety-first AGI company. To all OpenAI employees I want to say: learn to feel the AGI, act with the gravitas appropriate for what you're building. I believe you can ship the cultural change that's needed. I'm counting on you; the world is counting on you. OpenAI ❤

02:38

So first of all, whoa. AGI rolls around only once; subscribe. This is very, very different to me than anything that came before. Before this we had some hints and rumors and people talking in the background, but everyone was kind of tight-lipped. I'm sure there are contracts, non-disclosure agreements, maybe vesting schedules that people don't want to lose; whatever the case, no one really talked about anything. Elon Musk sues OpenAI to, as I saw it, try to get some of the documents read into the record and shown in front of a jury, regarding things like Q* and what's happening behind the scenes. Here's a picture of Ilya Sutskever, who of course officially and finally parted ways with OpenAI, and here's Jan Leike. So this is all happening kind of in real time, and Vox.com just published today "Why the OpenAI team in charge of safeguarding humanity imploded": company insiders explain why safety-conscious employees are leaving, and many of them are, more than we talked about. There's Helen Toner, the ex-board member that we believe is kind of responsible for that coup that happened in November, the firing of Sam Altman. Several AI safety researchers at OpenAI were fired for leaking information, for example the leaked Q* details, or lack of details; just the idea of that project existing was confirmed to be a real leak of a real project, but no further details were given.

04:08

Now this gets much deeper, by the way, because we are just now getting some new information about what's happening on the inside and why people are being kind of tight-lipped about it. But first, here's Sam Altman responding to Jan Leike's post, the one he created five hours ago saying that he's disagreeing with Sam Altman and the leadership, as he calls it, and pretty clearly saying that he doesn't believe enough safety precautions are being taken and that OpenAI employees should be very careful about how they're going to proceed with this. Sam responds: I'm super appreciative of Jan Leike's contributions to OpenAI's alignment research and safety culture, and very sad to see him leave. He's right, we have a lot more to do; we are committed to doing it. I'll have a longer post in the next couple of days. We'll be on the lookout for that; hopefully we'll get more clarity into what specifically the issue is here, because Jan Leike specifically said there's not enough compute, that was one of the complaints, but it feels like there's more going on, and I'd love to know exactly what.

05:17

Now, this was a Reddit post from a while back, but before we take a look at it, here's the very important and kind of annoying thing to understand: right now we're living in the moment where AI politics are going mainstream. More and more people are taking sides, and it is just like politics; you have your little tribes and different sides of the issue yelling back and forth, no one sees eye to eye, there's less and less agreement, everyone's getting more and more polarized. So as we read some of this stuff, keep in mind that we're beginning to get away from open-minded people discussing ideas and into this realm of highly politicized, polarizing arguments, with more and more people joining in who may not be equipped to understand all the intricacies of AI alignment. As Andrew here says: I don't think it's possible to explain the problems with alignment to the public without driving a bunch of people insane. I think this is very well said. The treacherous turn in particular is going to put a real bug in some people's ears. He's referring to this idea, a hypothetical event where an advanced AI system, which has been pretending to be aligned due to its relative weakness, turns on humanity once it attains sufficient power, so much so that it can pursue its true objectives without risk. But I would agree with this response, that I think most people won't even get that far and will be more preoccupied with pretty trivial things; most people will not have enough knowledge to argue coherently about this. My point is, as we read some of this stuff, just keep in mind that a lot of this is just people's opinions. You don't have to think of them as right or wrong, or even react to them in any way; just kind of think about where this whole thing is going.

07:00

So here's that Reddit post from a while back, saying: any company that makes AGI is going to want to feed it as many GPUs as money can buy while delaying having to announce AGI. They've now changed from a customer-facing company to a ninja throwing smoke bombs in order to throw people off the scent. They're going to want to release a bunch of amazing new products and make random cryptic statements to keep people guessing for as long as possible. Their actions will start to seem more and more chaotic and unnecessarily obtuse; customers will be happy but frustrated. They will start to release products that are unreasonably better than they should be, with unclear paths to their creation. There will be sudden breakdowns in staff loyalty and communications, firings, resignations, vague hints from people under NDAs. By the way, all those things we've seen. We've seen products that are unreasonably better than they should be; I'm thinking of Sora. I mean, technically it's not released, so maybe when it comes out we'll see that it was just not as good as we expected, but as we've covered on this channel, the 3D sort of simulation of the physics of the fluid movements in some of those videos seems to be unreasonably better than it should be. Dr. Jim Fan from Nvidia has talked about it quite a bit. I mean, take a look at this Sora-produced Minecraft video: this isn't Minecraft, this isn't a 3D game, this isn't something running on an Nvidia graphics card, this is text-to-video generation from Sora. This coffee cup with this pirate battleship that's going on in there, right, that's coffee swirling around in a cup. It's very difficult to produce; this is something that a lot of people spend a lot of time on in video game development, trying to create those fluid physics, and if Sora is released and we see that it easily generates it on the fly, certainly that would be surprising, I think.

08:49

Then they're saying: one day soon after, the military will suddenly take a large interest in OpenAI, and the company will go quiet. Now, this is a hypothetical scenario that they're talking about, but in November it was kind of surprising how quickly, I believe it was the Attorney General of New York's Southern District, was on the phone with Helen Toner and the other board members trying to settle the dispute, along with all the other very powerful people in the tech space who contributed to coming to the table and working things out. Next thing we know, the board of OpenAI is populated with people with close ties to the US government, people that have been very closely tied to the US government for decades. And as we'll see in a second, as this Vox article says, based on some leaks from inside the company, OpenAI's safety team grew to distrust Sam Altman. Ilya Sutskever posted back on December 6, right after that board coup: "I learned many lessons this past month. One such lesson is that the phrase 'the beatings will continue until morale improves' applies more often than it has any right to."

09:58

Now, before we go down that path, it is important to understand that there's an ideological, some would say, movement behind this: Pause AI. Here's a list of noteworthy people in the tech space, a lot of well-known AI researchers, and their P(Doom) values. P(Doom), if you're not aware, is their estimation of the chance of a catastrophic event due to AI, something that would, for example, cause human extinction, erase humanity. And we have various estimates, from very, very low to, for some people, Eliezer Yudkowsky notoriously, greater than 99%. And here's Dr. Techlash, we've covered her before; she's saying: remember how Jan Leike was a research associate at both Eliezer Yudkowsky's MIRI (Machine Intelligence Research Institute) and Nick Bostrom's Future of Humanity Institute? There's clearly an ideological influence at play here. And of course we see Jan Leike here, former alignment lead at OpenAI; his P(Doom) is 10 to 90%, so a fairly wide range, I would say. And of course we've heard from Daniel Kokotajlo, also a former OpenAI researcher, who also said some things that maybe weren't so positive for OpenAI's safety research team. Elon Musk is on here at 10 to 20% for catastrophic AI outcomes; Yann LeCun, head of Meta AI, at less than 0.1%; Vitalik Buterin is on here, Ethereum co-founder, also a person that funded a lot of these Pause AI efforts, he donated to some of the research teams behind AI safety efforts, he thinks it's 10%; Geoffrey Hinton is also at 10%; Lina Khan, head of the FTC, she's at 15%; Dario Amodei at 10 to 25%, he is the CEO of Anthropic; Yoshua Bengio, 20%; and Emmett Shear, who was supposed to take over as CEO of OpenAI, he thinks it's between 5 and 50%.

11:56

And here's why some of those former people at OpenAI are concerned. This is from Vox.com; I'll put a link down below. So they're saying here: if you've been following this whole saga on social media, you might think OpenAI secretly made a huge technological breakthrough. The meme "what did Ilya see?" speculates that Sutskever, the former chief scientist, left because he saw something horrifying, like an AI system that could destroy humanity. And certainly we've heard rumors like this, or at least rumors of OpenAI having something big, some big breakthrough, that potentially unsettled some of the people there, including Ilya. But the real answer may have less to do with pessimism about technology and more to do with pessimism about humans, and one human in particular: Altman. According to sources familiar with the company, safety-minded employees have lost faith in him. "It's a process of trust collapsing bit by bit, like dominoes falling one by one," a person with inside knowledge of the company told me, speaking on condition of anonymity. Not many employees are willing to speak about this publicly. That's partly because OpenAI is known for getting its workers to sign offboarding agreements with non-disparagement provisions upon leaving. If you refuse to sign one, you give up your equity in the company, which means you potentially lose out on millions of dollars, and maybe billions if you think about how much equity in that company could be worth in the future.

13:18

One former employee, however, refused to sign the offboarding agreement so that he could be free to criticize the company: Daniel Kokotajlo, who joined OpenAI in 2022. He said: OpenAI is training ever more powerful AI systems with the goal of eventually surpassing human intelligence across the board. This could be the best thing that has ever happened to humanity, but it could also be the worst if we don't proceed with care. I joined with the substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI, but it slowly became clear to many of us that this would not happen, and that forced them to quit.

13:55

So of course a lot of this happened last November. Helen Toner and Ilya Sutskever, working together with the OpenAI board, tried to fire Altman. The reason they gave is that Altman was "not consistently candid in his communications," and they really didn't say too much more than that. A lot of things happened: Microsoft invited all of OpenAI's top talent to Microsoft, effectively destroying OpenAI but basically allowing them to continue to build under the Microsoft umbrella, and Altman of course came back more powerful than ever, with a more supportive board and more power to run the company how he sees fit. "When you shoot at kings and miss, things tend to get awkward," which is well said, and certainly that's what happened with Ilya Sutskever, who finally officially left OpenAI and said he was heading off to pursue a project that is very personally meaningful to him. One thing they mention here is that it looks like Ilya had been remotely co-leading the superalignment team, tasked with making sure a future AGI would be aligned with the goals of humanity rather than going rogue. I actually was not aware of this, so it seems like he basically worked remotely, but on the same team, on the same objectives.

15:05

Now, this article kind of skews heavily against Sam Altman. So they're saying: what happened in November revealed something about Sam Altman's character; his threat to hollow out OpenAI unless the board rehired him, and his insistence on stacking the board with new members skewed in his favor, showed a determination to hold on to power and avoid future checks on it. Now, I'm not sure if this is true; I don't think this is real, because I don't think there were any threats to hollow out OpenAI unless the board rehired him. I believe Microsoft, being shrewd business people, offered to swallow up all of OpenAI's talent, which of course they would, it's a smart decision, but I wouldn't call that Sam Altman threatening to hollow out OpenAI. So keep that in mind as we look over this; this article is very much leaning against Sam Altman.

15:59

And there are a number of other examples of OpenAI safety researchers making various cryptic posts, like this one on the EA forum, saying that they resigned from OpenAI on February 15, 2024. When asked why, they replied "no comment," and the reason they reply "no comment" is this: there's a very restrictive offboarding agreement that contains non-disclosure and non-disparagement provisions that former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer; even acknowledging that the NDA exists is a violation of it. If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars.

16:49

And here's a piece from Wired saying OpenAI's long-term AI risk team has disbanded. Last year OpenAI said that the team would receive 20% of its computing power, but now that team, the superalignment team, is no more, the company confirms. Now, I'd love to know what you think, but keep in mind that each side will always just tell their story. The people that are aligned with Pause AI, a lot of them seemingly have connections with EA, effective altruism, and some of the actions that they took were not 100% on the up and up; there were some shenanigans going on there as well. They have a certain ideological lean and they're pursuing that, and a lot of the AI safety people seem to share those views. Here's another employee at OpenAI saying: everyone constantly believes they deserve more GPUs; it's basically a necessary feature of being a researcher. That was in fact one of the big complaints from a lot of the AI researchers, including Jan Leike; he posted saying we didn't have enough compute, we didn't have enough GPUs, basically they didn't give us enough computing power to do our research, and therefore we quit. Could that have been the case? Could it just be a case of not getting enough resources and looking elsewhere to get those resources to pursue their research projects? Let me know what you think, but keep this in mind: we're going to have more and more discussions like this in the world, on TV, on Twitter, on Facebook, as more and more of the world's population gets dragged into this conversation. Get ready for some pretty wild takes. But whatever the case, my name is Wes Roth, and thank you for watching.

Related Tags
AI Safety, OpenAI, Internal Conflict, Humanity, Tech Industry, Research Priorities, AI Alignment, Elon Musk, Sam Altman, Cultural Shift