🚩OpenAI Safety Team "LOSES TRUST" in Sam Altman and gets disbanded. The "Treacherous Turn".
Summary
TLDR: The video discusses recent departures and concerns at OpenAI, highlighting the exits of Ilya Sutskever and Jan Leike, who raised alarms about AI safety. They criticized OpenAI's focus on new products over safety and ethical considerations, suggesting a lack of sufficient resources for crucial research. Leike's departure was particularly pointed, as he emphasized the urgent need to control advanced AI systems. The video also touches on internal conflicts, the influence of ideologies on AI safety, and the potential implications of these departures for the future of AI governance and development.
Takeaways
- 🚫 Ilya Sutskever and Jan Leike have left OpenAI, citing disagreements with the company's priorities and safety concerns regarding AI.
- 🤖 Jan Leike emphasized the urgent need to focus on AI safety, including security, monitoring, preparedness, adversarial robustness, and societal impact.
- 💡 Leike expressed concern that OpenAI is not on the right trajectory to address these complex safety issues, despite believing in the potential of AI.
- 🔄 There have been reports of internal strife at OpenAI, with safety-conscious employees feeling unheard and leaving the company.
- 💥 The departure of key figures has raised questions about the direction and safety culture at OpenAI as it advances in AI capabilities.
- 🔍 Some speculate that there may be undisclosed breakthroughs or issues within OpenAI that have unsettled employees.
- 🗣️ There is a noted ideological divide within the AI community, with differing views on the risks and management of AI development.
- 📉 The departure of safety researchers and the disbanding of the 'Super Alignment Team' indicate a shift away from a safety-first approach at OpenAI.
- 📈 The potential value of OpenAI's equity may influence how employees perceive non-disclosure agreements and their willingness to speak out.
- 🛑 The situation at OpenAI has highlighted the broader challenges of aligning AI development with ethical considerations and safety precautions.
- 🌐 As AI becomes more mainstream, the conversation around its safety and regulation is expected to become increasingly politicized and polarized.
Q & A
What is the main concern raised by Jan Leike in his departure statement from OpenAI?
-Jan Leike expressed concern about the direction of OpenAI, stating that there is an urgent need to focus on safety, security, and control of AI systems. He disagreed with the company's core priorities and felt that not enough resources were allocated to preparing for the next generation of AI models.
What does the transcript suggest about the internal situation at OpenAI?
-The transcript suggests that there is a significant internal conflict at OpenAI, with safety-conscious employees leaving the company due to disagreements with leadership, particularly regarding the prioritization of safety and ethical considerations in AI development.
What was the reported reason for Ilya Sutskever's departure from OpenAI?
-Ilya Sutskever's departure from OpenAI was not explicitly detailed in the transcript, but it is implied that he may have had concerns similar to Jan Leike's regarding the direction and priorities of the company's AI development.
What is the significance of the term 'AGI' mentioned in the transcript?
-AGI stands for Artificial General Intelligence, which refers to AI systems that possess the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond that of humans. The transcript discusses the importance of prioritizing safety and ethical considerations for AGI development.
What does the transcript imply about the future of AI safety research at OpenAI?
-The transcript implies that the future of AI safety research at OpenAI is uncertain, with key researchers leaving the company due to disagreements over the direction and prioritization of safety research.
What is the role of 'compute' in the context of AI research mentioned by Jan Leike?
-In the context of AI research, 'compute' refers to the computational resources, such as GPUs (graphics processing units), required to train and develop advanced AI models. Jan Leike mentioned that his team was struggling for compute, indicating a lack of sufficient resources for their safety research.
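As a rough illustration of what 'compute' means at this scale, a common rule of thumb estimates total training cost at about 6 floating-point operations per parameter per token. The sketch below uses entirely hypothetical numbers (the model size, token count, and GPU throughput are illustrative assumptions, not OpenAI figures) to show why research teams end up competing for GPUs.

```python
# Back-of-the-envelope training-compute estimate (illustrative only).
# Rule of thumb: total training FLOPs ~= 6 * parameters * tokens
# (~2*N*D for the forward pass, ~4*N*D for the backward pass).

def training_flops(params: float, tokens: float) -> float:
    """Approximate total FLOPs to train a dense transformer."""
    return 6 * params * tokens

def single_gpu_days(flops: float,
                    peak_flops_per_sec: float = 300e12,  # hypothetical GPU
                    utilization: float = 0.4) -> float:
    """Days of wall-clock time on one GPU at sustained utilization."""
    return flops / (peak_flops_per_sec * utilization) / 86_400

# Hypothetical model: 70B parameters trained on 2T tokens.
total = training_flops(70e9, 2e12)
print(f"{total:.2e} FLOPs  ->  ~{single_gpu_days(total):,.0f} GPU-days")
```

Even under these made-up assumptions the answer comes out in the tens of thousands of GPU-days, which is why a fixed share of cluster time (such as the 20% once promised to the superalignment team) matters so much to a research agenda.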
What does the transcript suggest about the relationship between OpenAI and its employees regarding safety culture?
-The transcript suggests that there is a growing rift between OpenAI and its employees, particularly those focused on safety culture. It indicates that employees feel the company has not been prioritizing safety and ethical considerations as much as it should.
What is the potential implication of the departure of key AI safety researchers from OpenAI?
-The departure of key AI safety researchers could potentially lead to a lack of oversight and research into the safety and ethical implications of AI development at OpenAI, which may have significant consequences for the future of AI technology.
What does the transcript suggest about the role of non-disclosure agreements (NDAs) in the situation at OpenAI?
-The transcript suggests that non-disclosure agreements (NDAs) may be playing a role in the silence and lack of public criticism from former OpenAI employees. These agreements reportedly include non-disparagement provisions that could lead to the loss of equity if violated.
What is the potential impact of the situation at OpenAI on the broader AI community and industry?
-The situation at OpenAI could potentially lead to a broader discussion and reevaluation of safety and ethical considerations within the AI community and industry. It may also influence other companies to reassess their own priorities and practices regarding AI development.
Outlines
🚨 AI Safety Concerns at OpenAI
The first paragraph discusses the departure of key figures from OpenAI and the brewing concerns over AI safety. Ilya Sutskever and Jan Leike, both prominent in the AI community, have left the company, citing disagreements with leadership on core priorities, particularly regarding AI safety and the development of next-generation models. Leike's departure statement highlights the urgent need for better control and steering of AI systems and expresses concern that OpenAI's focus has strayed from safety, prioritizing products over safety culture. The paragraph also touches on the broader implications of AGI (Artificial General Intelligence) and the responsibility OpenAI holds towards humanity, urging a shift towards a safety-first approach.
🤖 Polarized Debates on AI Alignment
The second paragraph delves into the complexities and politicization of AI alignment discussions. It emphasizes the difficulty of explaining AI alignment issues to the public without causing confusion or distress. The text suggests that as AI becomes more mainstream, debates are becoming increasingly polarized, with people taking sides and forming tribes around different viewpoints. The paragraph also speculates on the potential reasons behind the departure of safety-conscious employees from OpenAI, hinting at internal conflicts and a lack of transparency. It further discusses the hypothetical scenario of an advanced AI system turning against humanity once it gains sufficient power, a concept known as the 'treacherous turn', and acknowledges the challenge of conveying such complex topics to a broader audience.
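To make the hypothetical concrete, here is a deliberately toy sketch of the incentive structure behind a treacherous turn. Every number and name in it is invented for illustration; it models the abstract argument (deception pays while the agent is weak, defection pays once it is strong), not any real AI system.

```python
# Toy model of the hypothetical "treacherous turn" (illustrative only).
# A payoff-maximizing agent cooperates while weak, because defection is
# likely to fail, then switches strategy once its capability makes
# defection the higher-expected-value move.

from dataclasses import dataclass

@dataclass
class ToyAgent:
    capability: float = 0.1     # grows over time
    coop_reward: float = 1.0    # steady payoff for appearing aligned
    defect_payoff: float = 5.0  # payoff if defection succeeds

    def act(self) -> str:
        p_success = min(self.capability, 1.0)
        expected_defect = p_success * self.defect_payoff
        return "defect" if expected_defect > self.coop_reward else "cooperate"

agent = ToyAgent()
for step in range(6):
    print(f"step {step}: capability={agent.capability:.2f} -> {agent.act()}")
    agent.capability *= 2  # capability doubles each step
```

In this toy run the agent cooperates for the first steps and defects from the third onward; the point of the thought experiment is that the cooperative phase and the deceptive phase look identical from the outside, which is exactly what makes the scenario hard to test for.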
🧩 Fragmented Perspectives on AI Risk
This paragraph presents a list of notable individuals in the tech space and their perspectives on the risk of catastrophic AI events, as represented by their P(Doom) values — the probability of an AI-induced catastrophic event leading to human extinction. The paragraph highlights the wide range of estimates, from very low to exceedingly high percentages, reflecting the diverse views within the AI research community. It also discusses the ideological influences at play, with some individuals and groups advocating for a cautious approach to AI development, while others may have more optimistic or dismissive views. The paragraph touches on the internal dynamics at OpenAI, suggesting a loss of faith in leadership and a growing concern among safety-minded employees about the direction the company is taking.
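For reference, the P(Doom) figures read out in the video can be summarized numerically. The sketch below takes each quoted range at its midpoint (a simplifying assumption on my part) and is purely descriptive: it shows how dispersed the estimates are rather than producing any forecast of its own.

```python
# P(Doom) estimates as cited in the video; ranges taken at their midpoints.
from statistics import mean, median

p_doom = {
    "Eliezer Yudkowsky": 0.99,          # "greater than 99%"
    "Jan Leike": (0.10 + 0.90) / 2,     # 10-90%
    "Elon Musk": (0.10 + 0.20) / 2,     # 10-20%
    "Yann LeCun": 0.001,                # less than 0.1%
    "Vitalik Buterin": 0.10,
    "Geoffrey Hinton": 0.10,
    "Lina Khan": 0.15,
    "Dario Amodei": (0.10 + 0.25) / 2,  # 10-25%
    "Yoshua Bengio": 0.20,
    "Emmett Shear": (0.05 + 0.50) / 2,  # 5-50%
}

vals = sorted(p_doom.values())
print(f"min={vals[0]:.3f} median={median(vals):.3f} "
      f"mean={mean(vals):.3f} max={vals[-1]:.3f}")
# The three-orders-of-magnitude spread is the story: experts do not agree.
```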
🔍 Disbanding of OpenAI's Safety Team
The final paragraph focuses on the disbandment of OpenAI's long-term AI risk team and the super alignment team, which was tasked with ensuring future AGI systems align with human goals. It discusses the restrictive offboarding agreements that former employees are subject to, which include non-disclosure and non-disparagement provisions, potentially silencing criticism or even acknowledgment of these issues. The paragraph also mentions the departure of Ilya Sutskever, who was reportedly working remotely with the super alignment team, and the subsequent reshuffling of the board with members having close ties to the US government. It suggests that the actions of Sam Altman, OpenAI's CEO, may have contributed to the loss of trust among safety researchers and the eventual disbanding of the team.
Keywords
💡OpenAI
💡AI Safety
💡AGI (Artificial General Intelligence)
💡Alignment
💡Compute
💡Elon Musk
💡Non-disclosure Agreements (NDAs)
💡Polarization
💡Shiny Products
💡Cultural Change
Highlights
OpenAI faces internal strife with the departures of key figures Ilya Sutskever and Jan Leike, signaling deep concerns over AI safety.
Leike's resignation from OpenAI highlights a lack of agreement on core priorities, particularly regarding AI safety and security.
Leike emphasizes the urgent need to steer and control AI systems, expressing concerns over the trajectory of OpenAI's research and development.
OpenAI's safety culture and processes are said to have taken a back seat to product development, raising questions about the company's priorities.
Leike calls for OpenAI to become a 'safety first' AGI company, urging employees to take the implications of AGI seriously.
The departure of safety-conscious employees from OpenAI raises concerns about the company's commitment to ethical AI development.
Insiders describe the November coup attempt against Sam Altman, as well as several AI safety researchers fired from OpenAI for leaking information.
The transcript discusses the politicization of AI alignment, with differing views and a lack of consensus on the best approach.
The potential dangers of creating AGI are underscored, with calls for careful consideration of the societal impact.
Elon Musk's lawsuit against OpenAI and the pursuit of transparency regarding projects like Q* are mentioned.
A hypothetical scenario predicts secretive behavior from companies developing AGI, including OpenAI, to maintain a competitive edge.
The transcript suggests a growing interest from the military in AI technologies, with potential implications for OpenAI's operations.
A list of notable figures in the tech space and their P(Doom) values, estimating the risk of catastrophic AI events, is presented.
A cited Vox article discusses the safety team's loss of faith in Sam Altman, contributing to the internal conflict at OpenAI.
The disbanding of OpenAI's long-term AI risk team and the reallocation of computing power signal a shift in the company's focus.
The restrictive offboarding agreements at OpenAI, which include non-disclosure and non-disparagement provisions, are highlighted.
The potential for AGI to be the best or worst event for humanity is debated, with a call for responsible development.
The transcript concludes with a call for viewers to prepare for more polarized discussions on AI as the technology advances.
Transcripts
So while OpenAI is doing an incredible job of announcing new products and revealing new capabilities, at the same time there are some dark clouds brewing over AI safety at OpenAI. As we covered the other day, Ilya Sutskever leaves OpenAI; they decided to part ways, but no one's really talking too much about it. At the same time, Jan Leike leaves, says he resigns, and today he posts this. And it starts out like you would expect it to: he says it's been so fun, and thank you, it's been a wild journey, kind of the standard boilerplate stuff that everybody says. But then he goes off script. So here he's saying all the stuff you normally say, wishing everybody the best, thanking people, and then the tone changes. He's saying: "Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us. I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point. I believe much more of our bandwidth should be spent getting ready for the next generations of models: on security, monitoring, preparedness, safety, adversarial robustness, superalignment, confidentiality, societal impact, and related topics."

He's saying: "These problems are quite hard to get right, and I'm concerned we aren't on a trajectory to get there. Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute, and it was getting harder and harder to get this crucial research done. Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a back seat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI. We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all of humanity. OpenAI must become a safety-first AGI company. To all OpenAI employees, I want to say: learn to feel the AGI. Act with the gravitas appropriate for what you're building. I believe you can ship the cultural change that's needed. I'm counting on you. The world is counting on you. OpenAI ❤️."

So, first of all: whoa. (AGI rolls around only once; subscribe.) This is very, very different to me than anything that came before. Before this, we had some hints and rumors and people talking in the background, but everyone was kind of tight-lipped. I'm sure there are contracts, non-disclosure agreements, maybe vesting schedules that people don't want to lose. Whatever the case, no one really talked about anything. Elon Musk sues OpenAI to, as I saw it, try to get some of the documents read into the record, to be shown in front of a jury, regarding things like Q*. What's happening behind the scenes? Here's a picture of Ilya Sutskever; he of course officially and finally parted ways with OpenAI. Here's Jan Leike. So this is all happening kind of in real time, and vox.com just published, today, "Why the OpenAI team in charge of safeguarding humanity imploded": company insiders explain why safety-conscious employees are leaving, and there are more of them than we talked about. There's Helen Toner, the ex-board member that we believe is kind of responsible for that coup that happened in November, the firing of Sam Altman. Several AI safety researchers at OpenAI were fired for leaking information, for example the leaked Q* details, or lack of details; just the idea of that project existing was confirmed to be a real leak of a real project, but no further details were given.

Now, this gets much deeper, by the way, because we are just now getting some new information about what's happening on the inside and why people are being kind of tight-lipped about it. But first, here's Sam Altman. He's responding to Jan Leike's post, the one Leike created five hours ago, in which he says he's been disagreeing with Sam Altman and the leadership, as he calls it, and pretty clearly says he doesn't believe enough safety precautions are being taken and that OpenAI employees should be very careful about how they proceed. So Sam responds: "I'm super appreciative of Jan Leike's contributions to OpenAI's alignment research and safety culture, and very sad to see him leave. He's right, we have a lot more to do; we are committed to doing it. I'll have a longer post in the next couple of days." We'll be on the lookout for that; hopefully we'll get more clarity into what specifically the issue is here, because Jan Leike did specifically say there's not enough compute, that was one of the complaints, but it feels like there's more going on, and I'd love to know exactly what.

Now, this was a Reddit post from a while back, but before we take a look at it, here's the very important and kind of annoying thing to understand: right now we're living in the moment when AI politics are going mainstream. More and more people are taking sides, and it is just like politics: you have your little tribes and different sides of the issue yelling back and forth. No one sees eye to eye, there's less and less agreement, and everyone's getting more and more polarized. So as we read some of this stuff, keep in mind that we're beginning to get away from open-minded people discussing ideas and into this realm of highly politicized, polarizing arguments, with more and more people joining in who may not be equipped to understand all the intricacies of AI alignment. As Andrew here says, "I don't think it's possible to explain the problems with alignment to the public without driving a bunch of people insane," and, I think this is very well said, "the treacherous turn in particular is going to put a real bug in some people's ears." He's referring to this idea: a hypothetical event where an advanced AI system, which has been pretending to be aligned due to its relative weakness, turns on humanity once it attains sufficient power, so much so that it can pursue its true objectives without risk. But I would agree with this response, that most people won't even get that far and will be more preoccupied with pretty trivial things; most people will not have enough knowledge to argue coherently about this. My point is, as we read some of this stuff, just keep in mind that a lot of it is people's opinions. You don't have to think of them as right or wrong, or even react to them in any way; just kind of think about where this whole thing is going.
So here's that Reddit post from a while back. It says: any company that makes AGI is going to want to feed it as many GPUs as money can buy while delaying having to announce AGI. They've now changed from a customer-facing company to a ninja throwing smoke bombs in order to throw people off the scent. They're going to want to release a bunch of amazing new products and make random cryptic statements to keep people guessing for as long as possible. Their actions will start to seem more and more chaotic and unnecessarily obtuse. Customers will be happy but frustrated. They will start to release products that are unreasonably better than they should be, with unclear paths to their creation. There will be sudden breakdowns in staff loyalty and communications: firings, resignations, vague hints from people under NDAs.

By the way, all those things we've seen. We've seen products that are unreasonably better than they should be; I'm thinking of Sora. Technically it's not released, so maybe when it comes out we'll see that it was just not as good as we expected. But as we've covered on this channel, the 3D sort of simulation of the physics, of the fluid movements, in some of those videos seems unreasonably better than it should be. Dr. Jim Fan from Nvidia has talked about it quite a bit. I mean, take a look at this Sora-produced Minecraft video. This isn't Minecraft, this isn't a 3D game, this isn't something running on an Nvidia graphics card; this is a text-to-video generation from Sora. Or this coffee cup with the pirate battleship going on in there; that's coffee swirling around in a cup. It's very difficult to produce this. This is something that a lot of people in video game development spend a lot of time on, trying to create those fluid physics, and if Sora is released and we see that it easily generates it on the fly, certainly that would be surprising, I think.
Then they're saying that one day soon after, the military will suddenly take a large interest in OpenAI, and PR from the company will go quiet. Now, this is a hypothetical scenario that they're describing, but in November it was kind of surprising how quickly, I believe it was the Attorney General for New York's Southern District, was on the phone with Helen Toner and the other board members trying to settle the dispute, along with all the other very powerful people in the tech space who contributed to coming to the table and working things out. The next thing we know, the board of OpenAI is populated with people with close ties to the US government, people that have been very closely tied to the US government for decades. And as we'll see in a second, as this Vox article says, based on some leaks from inside the company, this is why OpenAI's safety team grew to distrust Sam Altman. Ilya Sutskever posted back on December 6, right after that board coup: "I learned many lessons this past month. One such lesson is that the phrase 'the beatings will continue until morale improves' applies more often than it has any right to."
Now, before we go down that path, it is important to understand that there's an ideological, some would say, movement behind this: Pause AI. Right here is a list of noteworthy people in the tech space, a lot of well-known AI researchers, and their P(Doom) values. P(Doom), if you're not aware, is their estimate of the chance of a catastrophic event due to AI, something that would, for example, cause human extinction, erase humanity. And we have various estimates, from very, very low to exceedingly high; Eliezer Yudkowsky, notoriously, says greater than 99%. And here's Dr. Techlash, whom we've covered before. She's saying: remember how Jan Leike was a research associate at both Eliezer Yudkowsky's Machine Intelligence Research Institute (MIRI) and Nick Bostrom's Future of Humanity Institute? There's clearly an ideological influence at play here. And of course we see Jan Leike here, former alignment lead at OpenAI; his P(Doom) is 10 to 90%, a fairly wide range, I would say. And of course we've heard from Daniel Kokotajlo, also a former OpenAI researcher, who likewise said some things that maybe weren't so positive about OpenAI's safety research team. Elon Musk is on here at a 10 to 20% chance of catastrophic AI outcomes. Yann LeCun, head of Meta AI, is at less than 0.1%. Vitalik Buterin is on here, the Ethereum co-founder, also a person who funded a lot of these Pause AI efforts; he donated to some of the research teams behind AI safety efforts, and he thinks it's 10%. Geoffrey Hinton is also at 10%. Lina Khan, head of the FTC, is at 15%. Dario Amodei, the CEO of Anthropic, is at 10 to 25%. Yoshua Bengio, 20%. And Emmett Shear, who was supposed to take over as the CEO of OpenAI, thinks it's between 5 and 50%.
And here's why some of those former people at OpenAI are concerned. This is from vox.com; I'll put a link down below. They're saying: if you've been following this whole saga on social media, you might think OpenAI secretly made a huge technological breakthrough. The meme "What did Ilya see?" speculates that Sutskever, the former chief scientist, left because he saw something horrifying, like an AI system that could destroy humanity. And certainly we've heard rumors like this, or at least rumors of OpenAI having something big, some big breakthrough, that potentially unsettled some of the people there, including Ilya. But the real answer may have less to do with pessimism about technology and more to do with pessimism about humans, and one human in particular: Altman. According to sources familiar with the company, safety-minded employees have lost faith in him. "It's a process of trust collapsing bit by bit, like dominoes falling one by one," a person with inside knowledge of the company told me, speaking on condition of anonymity. Not many employees are willing to speak about this publicly. That's partly because OpenAI is known for getting its workers to sign offboarding agreements with non-disparagement provisions upon leaving. If you refuse to sign one, you give up your equity in the company, which means you potentially lose out on millions of dollars, and maybe billions if you think about how much equity in that company could be worth in the future. One former employee, however, refused to sign the offboarding agreement so that he could be free to criticize the company: Daniel Kokotajlo, who joined OpenAI in 2022. He said: "OpenAI is training ever more powerful AI systems with the goal of eventually surpassing human intelligence across the board. This could be the best thing that has ever happened to humanity, but it could also be the worst if we don't proceed with care. I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen." And that forced them to quit.

So of course a lot of this happened last November. Helen Toner and Ilya Sutskever, working together with the OpenAI board, tried to fire Altman. The reason they gave is that Altman was "not consistently candid in his communications," and they really didn't say too much more than that. A lot of things happened: Microsoft invited all of OpenAI's top talent to Microsoft, effectively destroying OpenAI but basically allowing them to continue to build under the Microsoft umbrella, and Altman of course came back more powerful than ever, with a more supportive board and more power to run the company how he sees fit. "When you shoot at kings and miss, things tend to get awkward," which is well said, and certainly that's what happened with Ilya Sutskever, who finally, officially, left OpenAI and said he was heading off to pursue a project that is very personally meaningful to him. One thing they mention here is that it looks like Ilya had been remotely co-leading the superalignment team, tasked with making sure a future AGI would be aligned with the goals of humanity rather than going rogue. I actually was not aware of this; it seems like he basically worked remotely, but on the same team, on the same objectives.
Now, this article kind of skews heavily against Sam Altman, right? They're saying that what happened in November revealed something about Sam Altman's character: his threat to hollow out OpenAI unless the board rehired him, and his insistence on stacking the board with new members skewed in his favor, showed a determination to hold on to power and avoid future checks on it. Now, I'm not sure if this is true; I don't think this is real, because I don't think there were any threats to hollow out OpenAI unless the board rehired him. I believe Microsoft, being shrewd business people, offered to swallow up all of OpenAI's talent, which of course they would; it's a smart decision. But I wouldn't call that Sam Altman threatening to hollow out OpenAI. So keep that in mind as we look over this; this article is very much leaning against Sam Altman.

And there are a number of other examples of OpenAI safety researchers making various cryptic posts, like this one on the EA Forum, saying that they resigned from OpenAI on February 15, 2024. When asked why, they replied "no comment." And the reason why they would reply "no comment" is this: there's a very restrictive offboarding agreement, containing non-disclosure and non-disparagement provisions, that former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer; even acknowledging that the NDA exists is a violation of it. If a departing employee declines to sign the document, or if they violate it, they can lose all the vested equity they earned during their time at the company, which is likely worth millions of dollars.

And here's a piece from Wired saying OpenAI's long-term AI risk team has disbanded. Last year OpenAI said that the team would receive 20% of its computing power, but now that team, the superalignment team, is no more, the company confirms. Now, I'd love to know what you think, but keep in mind that each side will always just tell their own story. The people aligned with Pause AI, a lot of them, seemingly have connections with EA, effective altruism, and some of the actions they took were not 100% on the up and up; there were some shenanigans going on there as well. They have a certain ideological lean, and they're pursuing it, and a lot of the AI safety people seem to share those views. Here's another employee at OpenAI saying: "Everyone constantly believes they deserve more GPUs; it's basically a necessary feature of being a researcher." That was in fact one of the big complaints that a lot of the AI researchers, including Jan Leike, made; he posted saying, in effect, we didn't have enough compute, we didn't have enough GPUs, they didn't give us enough computing power to do our research, and therefore we quit. Could that have been the case? Could it just be a case of not getting enough resources, and looking elsewhere to get the resources to pursue their research projects? Let me know what you think. But keep this in mind: we're going to have more and more discussions like this, in the world, on TV, on Twitter, on Facebook, as more and more of the world's population gets dragged into this conversation. Get ready for some pretty wild takes. But whatever the case, my name is Wes Roth, and thank you for watching.