AI NEWS OpenAI vs Helen Toner. Is 'AI safety' becoming an EA cult?
Summary
TL;DR: The video discusses recent controversies surrounding OpenAI, focusing on the dismissal of Sam Altman and the subsequent fallout. It examines claims made by former board member Helen Toner, who alleges being kept in the dark about AI developments and accuses Altman of a history of deceit. The video also critiques the effective altruist movement's influence on AI safety, highlighting their extreme views on halting AI progress and the potential for global surveillance. The narrative questions the motives behind these actions and urges viewers to consider the broader implications of letting a vocal minority dictate AI regulation.
Takeaways
- 🗣️ A former OpenAI board member, Helen Toner, has spoken out about the circumstances surrounding Sam Altman's firing, sparking controversy and debate within the AI community.
- 🔍 Helen Toner claimed that she and others were kept in the dark about significant developments at OpenAI, such as the launch of ChatGPT, which they only learned about through Twitter.
- 🚫 OpenAI's current board has refuted Helen Toner's claims, stating that they commissioned an external review which found no evidence of safety concerns leading to Sam Altman's departure.
- 👥 The debate has become somewhat tribal, with people taking sides and supporting the narratives that align with their pre-existing views rather than objectively assessing the situation.
- 💡 There are concerns that the conversation around AI safety is being dominated by a minority with extreme views, potentially skewing the direction of AI regulation and research.
- 🌐 Some individuals within the effective altruist movement are pushing for stringent global regulations on AI development, including bans on certain technologies and surveillance measures.
- 🕊️ The term 'AI safety' has been co-opted by groups with apocalyptic views on AI, leading to confusion and a tarnishing of the term for those working on legitimate safety concerns.
- 💥 There is a risk that the focus on existential risks from AI could overshadow more immediate and tangible concerns about AI's impact on society and the need for practical safety measures.
- 📉 The influence of certain organizations and individuals with extreme views could have negative repercussions on the AI industry, potentially stifling innovation and progress.
- 🌟 The video script emphasizes the importance of balanced and evidence-based discussions around AI development and safety, rather than succumbing to fear-mongering or cult-like ideologies.
Q & A
What is the main controversy discussed in the video script?
-The main controversy discussed is the dismissal of Sam Altman from OpenAI and the subsequent claims and counterclaims made by various parties, including Helen Toner, an ex-board member, and the current OpenAI board.
What was Helen Toner's claim about the ChatGPT revelation?
-Helen Toner claimed that she and the board learned about ChatGPT on Twitter, suggesting they were kept in the dark about this significant AI breakthrough.
How did OpenAI respond to Helen Toner's claims?
-OpenAI responded by stating they do not accept the claims made by Helen Toner and another board member. They commissioned an external review by a prestigious law firm, which found that the prior board's decision did not arise from product safety or security concerns.
What is the significance of GPT-3.5 in the context of the video?
-GPT-3.5 is an existing AI model that had been available for more than 8 months before the release of ChatGPT. It signifies that the technology behind ChatGPT was not new; rather, its user interface and format as a chat application are what became popular.
What was the claim made by Helen Toner about Sam Altman's past?
-Helen Toner claimed that Sam Altman had a history of being fired for deceitful and chaotic behavior, including from Y Combinator and his original startup, Loopt.
How did Paul Graham, the founder of Y Combinator, respond to the claim about Sam Altman's dismissal from Y Combinator?
-Paul Graham clarified that Sam Altman was not fired but rather agreed to step down from Y Combinator to focus on OpenAI when it announced its for-profit subsidiary, which Sam was going to lead.
What is the concern regarding the influence of the Effective Altruism (EA) movement on AI policy?
-The concern is that the EA movement, with its belief in the imminent risk of AI superintelligence and potential existential threats, may be pushing for extreme regulatory measures that could stifle innovation and progress in AI.
What is the view of some researchers and experts on the existential risk posed by AI?
-Some researchers and experts believe that while existential risks could emerge, there is currently little evidence to suggest that future AIs will cause such destruction, and more pressing, real-world concerns about AI should be addressed.
What is the criticism of the EA movement's approach to AI safety?
-The criticism is that the EA movement has hijacked the term 'AI safety' and focuses on extreme doomsday scenarios, which overshadows more practical and grounded concerns about AI's impact on society and the need for sensible regulations.
What is the argument made by the video script against the extreme regulatory measures proposed by some AI safety advocates?
-The argument is that extreme measures, such as global bans on AI training runs or surveillance on GPUs, are not rational and could have disastrous consequences, such as nuclear conflict, which should not be the basis for governing and regulating AI development.
Outlines
🤖 AI Controversy and Board Member's Claims
The video delves into the controversy surrounding the dismissal of Sam Altman from OpenAI, with a focus on the claims made by Helen Toner, an ex-board member. It discusses the community's divided opinion and the 'bombshell' revelations from Toner's interview, such as the allegation that the board was kept in the dark about ChatGPT until it was revealed on Twitter. The video also mentions the response from the current OpenAI board, which refutes Toner's claims and highlights an external review conducted by the law firm WilmerHale that found no evidence of AI safety concerns leading to Altman's departure.
🔍 Misrepresentations and the Reality of AI Developments
This paragraph addresses perceived misrepresentations made by Helen Toner regarding the OpenAI situation. It clarifies that the technology behind ChatGPT was not a secret and had been available for months, suggesting that Toner's claim of learning about it on Twitter might be an exaggeration. The paragraph also refutes the claim that Sam Altman was fired from Y Combinator for deceitful behavior, with Paul Graham, a mentor to Altman, clarifying that Altman's move was a mutual decision to focus on OpenAI rather than a dismissal.
🧐 The Influence of Effective Altruism on AI Policy
The video script discusses the influence of the Effective Altruism (EA) movement on AI policy, suggesting that some members have extreme views on AI safety, such as the belief in an imminent AI superintelligence that could lead to humanity's extinction. It raises concerns about the EA's approach to AI regulation, which includes ideas like banning certain hardware and enforcing global surveillance to prevent AI development. The paragraph also highlights the potential negative impact of these beliefs on the broader conversation around AI safety and policy.
💡 The Distortion of AI Safety Discourse
This section of the script criticizes the distortion of the AI safety discourse by certain groups with extreme views on AI, which it associates with the Effective Altruism movement. It argues that these groups are overshadowing more grounded and pressing concerns about AI's real-world applications and potential harms, such as the impact on marginalized communities. The video calls for a more balanced and evidence-based approach to AI safety, rather than one driven by fear of an AI apocalypse.
🛑 The Risks of Overzealous AI Regulation
The speaker expresses concern over the potential risks associated with overzealous AI regulation, particularly that advocated by certain groups within the Effective Altruism movement. The paragraph outlines extreme regulatory measures such as making hardware illegal or imposing pervasive surveillance on data centers. It emphasizes the need for a more nuanced and practical approach to AI regulation that doesn't stifle innovation and progress.
🌐 The Importance of Openness in AI Development
In this paragraph, the script highlights the benefits of open-source AI and the importance of sharing knowledge to improve security and prevent vulnerabilities, drawing a parallel with cybersecurity. It contrasts this with the views of a minority advocating for extreme measures such as global surveillance and potential military conflict to halt AI development. The speaker argues against letting such extreme perspectives govern and regulate AI, advocating for a balanced and rational approach to its development and safety.
🚀 Balancing Progress and Caution in AI Development
The final paragraph wraps up the discussion by emphasizing the need to balance progress and caution in AI development. It criticizes the extreme views that suggest halting all AI progress and instead calls for a reasoned, evidence-based approach to managing risks. The speaker encourages viewers to consider multiple perspectives on AI safety and to be wary of letting cultish ideologies dictate the future of AI regulation and development.
Keywords
💡OpenAI
💡Sam Altman
💡Helen Toner
💡AI Regulation
💡AGI (Artificial General Intelligence)
💡Effective Altruism
💡AI Safety
💡Existential AI Risk (x-risk)
💡WilmerHale
💡ChatGPT
💡Y Combinator
Highlights
Ex-board member of OpenAI, Helen Toner, speaks out on Sam Altman's firing.
OpenAI's response to Helen Toner's claims, denying her allegations.
External review by law firm WilmerHale found no AI safety concerns led to Altman's departure.
ChatGPT's release was not a secret, contrary to Toner's claims; it was based on existing GPT-3.5 technology.
Paul Graham clarifies Sam Altman was not fired from Y Combinator, contradicting media narratives.
Critique of Helen Toner's actions and EA (Effective Altruism) movement's influence on AI policy.
Concerns about EA's extreme views on AI regulation, including global bans and surveillance.
EA's shift from humanitarian goals to focusing on AI doomsday scenarios.
Debate on the validity of EA's claims about AI's potential to cause human extinction.
Impact of EA-funded research on the AI policy landscape in Washington.
Criticism of EA's approach to AI safety versus other researchers with different focuses.
Discussion on the need for balanced AI regulation that accounts for both risks and benefits.
The importance of not letting extreme views dominate AI development and policy.
EA's self-identification as part of AI safety and the confusion it causes in the field.
The potential consequences of EA's influence on global AI development and international relations.
The contrast between EA's apocalyptic views and more nuanced perspectives on AI's future.
The call for a rational and balanced approach to AI development and its associated risks.
Transcripts
big AI news today so an ex-board member
of open AI comes out talking about what
actually happened during Sam Altman's
firing we covered that a few days ago
her name is Helen Toner but today we get
the OpenAI response as well as some
other people that weigh in on whether or
not some of the claims that she makes
are truthful or not so let's take a look
at that but before we do take a listen
to this 30 second clip of Eliezer Yudkowsky and
tell me do you agree with what he's
saying do you agree with how he thinks
we should regulate AI how you feel about
what you will hear today will greatly
depend on whether you agree with what he
is saying should this type of research
and development be made against the law
yeah basically I think that we should
track all the GPUs have international
arrangements for all of the AI all the AI
training tech to end up in only
monitored supervised licensed data
centers and Allied countries and and
just like not permit training runs more
powerful than GPT 4 like that whole line
of reasoning is not that regulating the
stuff will protect you from a super
intelligence because it will not that's
more in the hopes that people change
their minds later maybe after some major
disaster that doesn't kill everyone you
don't like press the off switch to deal
with the super intelligence the super
intelligence does not let you know that
you need to press the off switch until
you are already dead AGI rolls around
only once subscribe so Helen Toner comes
out spilling the beans on what happened
during that whole OpenAI fiasco
where Sam Altman got fired tons of people
had to be brought in to kind of manage
the situation and figure out who's going
to be running open AI moving forward one
thing that's obvious to me is that the
community the people following this you
I everybody else were very divided on
what actually happened who is at fault
who is to blame who's telling the truth
who's being honest so the kind of big
bombshell revelations in Helen Toner's
interview were the following the biggest
one that I think a lot of people quoting
is the fact that they learned about
ChatGPT on Twitter that was a line that she
used to kind of point out that they were
being kept in the dark about ChatGPT this
great new breakthrough in AI technology
also she mentioned that Sam had a
history of being fired for his deceitful
and chaotic Behavior as she puts it
maybe she was quoting somebody but she
was saying he was fired from Y
Combinator he was fired from Loopt his
original startup and also that Sam
didn't inform the board that he owned
the open AI startup fund now we've
covered that interview already but since
then there's been a lot more sort of
Revelations about what's actually been
happening some people had a chance to
kind of reply back to some of these
allegations so today let's take a look
at it now unfortunately I know that a
lot of people this seems to be becoming
kind of a tribal thing right so you have
red team blue team and you're just
rooting for the people that you want to
win I don't know if that's the right
approach here I think as you'll find out
there's a lot more gray area here than
at first meets the eye but let's take a look
so first of all here is the OpenAI
board the current one responds to
Helen Toner about her claims they're
saying we do not accept the claims made
by Ms Toner and Ms McCauley regarding
events at OpenAI so those were the two
board members that were kicked off after
the firing of Sam Altman so they're
saying the first step we took was to
commission an external review of events
leading up to Mr Altman's forced
resignation and this is true they hired
as they call it a prestigious law firm
we've covered this when this happened
WilmerHale they led the review they
conducted dozens of interviews with
members of OpenAI's previous board
including Ms Toner and Ms McCauley
OpenAI executives advisers to the previous
board and other pertinent witnesses
reviewed more than 300,000 documents and
evaluated various corporate actions and
both Ms Toner and Ms McCauley provided ample input to the
review the review's findings rejected the
idea that any kind of AI safety concern
necessitated Mr Altman's replacement and
that law firm found that the prior
board's decision did not arise out of
concerns regarding product safety or
security the pace of development
OpenAI's finances or its statements to
investors customers or business partners
they say we regret that Ms Toner
continues to revisit the issues that
were thoroughly examined by the WilmerHale
led review rather than moving forward Ms
Toner has continued to make claims in the
press although perhaps difficult to
remember now OpenAI released ChatGPT in
November 2022 as a research project to
learn more about how useful its models
are in conversational settings it was
built on GPT-3.5 an existing AI model
which had already been available for
more than 8 months at the time this is
an important thing to understand that
this technology was available everyone
knew about it there were companies like
for example Jarvis AI I think was called
and then I think they changed their name
to Jasper AI but it was basically for
writing SEO optimized articles they were
running on GPT 3.5 for a long long time
if you look at OpenAI's YouTube channel
this was 2 years ago so this was August
10th
2021 that seems like a lot more than two
years ago so this is the OpenAI Codex
that Ilya Sutskever and Greg are presenting
here which already resembles a back and
forth kind of chat application but this
one is for coding this was codex and
they're inviting people to participate
to use the API to use it for their own
needs so there wasn't really anything
new that was released ChatGPT was a
little demo nobody thought it was going
to blow up the way it did they took an
already existing technology that
hopefully everyone was aware of if
you're in the board I hope you were
aware what the company was working on
and they just packaged it up in a chat
format and they put out there for
research purposes and it blew up again I
don't think anyone quite expected that
it would become the fastest growing
app of all time do you think that
could have been possible to just predict
that would happen I remember Elon Musk
posting that it's scary good and I think
that's what got me to try it initially
so her saying that she was made aware of
ChatGPT on Twitter I'm
not exactly sure what that means either
she just wasn't paying attention to what
was happening in the company cuz again
all those pieces of Technology were
available they were available to the
public there was the playground there
was the API the only thing that changed
was the user interface and that user
interface just clicked with everybody it
opened up everyone's eyes to what was
possible so I think it's one of those
things where she's saying the truth but
the reason people are reposting it is
because it sounds so much scarier like
she had no idea this product was
released it was already released it was
just a different UI that was kind of put
in place I think that's a fair thing to
say because again the playground the web
page where you can mess around with it
that was already available API was
already available so that's Point number
one that she made that to me seems like
a dubious claim it's either made to
sound sensational when it's really
not I'm not sure but it just something
about it feels a little bit fishy but
let's continue she also said that Sam
Altman was fired from Y Combinator for
his deceptive and chaotic behavior so
yesterday tons of newspapers popped up
with their stories about how Paul Graham
fired Sam Altman from Y Combinator so
Graham who was something of a mentor to
the young tech guru to Sam Altman flew
from the United Kingdom to San Francisco
to personally give his protégé the boot to
fire him here's the Washington Post
Altman's polarizing past hints at OpenAI
board's reason for firing him so it
says same thing that Graham flew from
the United Kingdom to San Francisco to give
his protégé the boot he fired Sam Altman
before we continue can we agree that
that's what these newspapers say right
this the impression that they give you
if you had to summarize it to somebody
would that be an accurate summarization
of what it says here I think so right
here's the problem with that Paul Graham
today commented on what actually
happened he said I get tired of hearing
that Y Combinator fired Sam so
here's what actually happened for
several years he was running both YC and
OpenAI but when OpenAI announced that it
was going to have a for-profit
subsidiary and that Sam was going to be
the CEO we specifically Jessica told him
that if he was going to work full-time
on OpenAI we should find someone else to
run Y Combinator and he agreed if he
said that he was going to find someone
else to be the CEO of OpenAI so that
he could focus 100% on Y Combinator
we would be just fine with that too we
didn't want him to leave just to choose
one or the other now some people in the
comments are trying to push this to say
that well that's what firing is right
that's the same thing and Paul responds
no we would have been happy if he stayed
and got someone else to run open AI
can't you read I mean if you are okay
with a person staying and running your
company if the issue is you just don't
want to have a split Focus or
potentially conflict of interest you're
saying hey either you're 100% here or
100% there the fact that you're okay
with them being 100% running your
company that's not the same as firing
that's not the same as you know as the
Washington Post puts it giving him the
boot however you want to phrase that
that's not what happened here so again
that to me seems like another lie that
everybody bought from Helen Toner and
again as we covered yesterday the issue
here is that some of the organizations
that she is somehow affiliated with do
specifically tell the members you know
here are our talking points here's what
we actually believe right on one hand
and on other hand here's our talking
points for the normies for the people
that might not agree with us here's a
post by Nathan Lands saying Paul Graham
says the story about Sam Altman being fired
from Y Combinator is not true I
think there are many cases like this
where people are assuming things bad
about Sam that probably aren't true I've
never met Sam but I've only ever heard
great things about him as a person most
say he's one of the most genuinely nice
and intelligent people they've ever met
which is again before a lot of the stuff
that was happening with open AI you
would only hear nice things about Sam
Altman by the way me personally I'm never
surprised that some of these high
charging people aren't quote unquote nice
I think deep down to go that hard after
some of these goals you kind of have to
be a bit of a killer look at Steve Jobs
right when his biography was released
and we started learning about how he was
and I mean there were rumors about
beforehand but like he wasn't a very
nice guy people didn't really love him
all the time in fact some of the people
that work closely with him afterwards
said that while they really didn't like
working with him because of how he was
later on they reflected saying that
during that time they were pushed harder
and they accomplished more like they
just extracted more from themselves
the output was much better much
higher level because of how he pushed
them same thing with Elon Musk right the
latest book that was released about him
talks about demon mode this rage that he
goes into to push certain projects
through to put pressure on people Bill
Gates was a nice guy we're always so
surprised when these high charging
people that achieve so much Against All
Odds aren't super duper nice Ellen
DeGeneres
remember when it came out that maybe she
wasn't the greatest person to her staff
or whatever and people were shocked but
she dances so well she always does her a
little happy dance how could she be a
bad person so none of this is to say
that any of this is defending Sam Altman
necessarily not saying he's a saint
and anybody against him are the bad
guys and you're free to not like Sam
Altman and disagree with how he's running
things that's not the point of this
video to dissuade you from that I'm just
saying don't buy into everything that's
said especially by people that might
have very specific motives that they
want to push through that might be hard
to convince people to follow you if you
actually said what your motives are
right and we'll talk more about that in
a second here's Rick Burton so I'm not
familiar with this person so take this
with a grain of salt but he's somebody
that came out and spoke against Helen
toner saying I lived in a community with
Helen Toner let me tell you what she is
Helen is the very worst that Academia
has to offer she thinks opinions matter
more than actions while she was writing
puff pieces about China Sam Altman was
working she lucked into the open AI
board and staged a coup Helen Toner has
destroyed value she has created nothing
of value she's not open to open debate
and now she's using her dying voice to
hurt Sam Helen completely misunderstood
what a board does from day one it is
there for quarterly oversight and acting
as a check on the CEO she never gave
feedback to Sam she just tried to fire
him this is not what a competent board
does they work on the problems again as
she talks about in the interview they
assumed that Sam if he heard about the
firing if he got any word about it he
would try to counteract it somehow so
they purposely set it up in such a way
that he would not have any knowledge of
it they didn't warn him they didn't talk
to him they as far as I can tell never
tried to work the problem out also again
if a board there is for quarterly
oversight then yeah they're probably not
reporting to her every UI change or
every release of an app she must have
known about GPT-3.5 or GPT-3 before it
because it was there 8 to 10 months
before the release of ChatGPT which
again was more of a UI change I know it
seems big to us now but the technology
was all there it's just the wrapper that
got released to the public was much more
popular than anyone could have expected
now one of the reasons that people are
worried about people from a background
in EA the effective altruist
organization if they're somehow linked
to it you know here for example a lot of
people at NIST the staffers would
revolt against expected appointment of
an EA AI researcher the reason being is
that a lot of the beliefs that these
organizations have are probably not the
beliefs that you and I share things they
believe are things like that we're only
months or years away from building an AI
superintelligence able to outsmart the
world's Collective efforts to control it
right so we're potentially months away
from having an AI super intelligence and
what does that mean well according to
Eliezer Yudkowsky if stopping malignant
AI requires war between nuclear armed
Nations that would be a price worth
paying do you agree with that mentality
like should we pause all AI indefinitely
stop any progress try to control all the
GPUs so that nobody's able to research
or do any work with AI and then spy
on other nations to make sure they're
not doing it and if they are then
nuclear war is an option to shut them
down to prevent them from working on AI
do you agree with that statement this is
Politico by the way that I'm
reading from it continues the prophets of
the AI apocalypse are boosted by an
avalanche of tech dollars and also
billions in crypto funds as we'll see in
a second with much of it flowing through
Open Philanthropy a major funder of
effective altruist causes it's an epic
infiltration said one biosecurity
researcher in Washington right and a lot
of these EA people the members of this
movement so they're usually white
typically male and often hail from
privileged backgrounds and like many of
her peers conell calls EA a cult right
Sam Bankman-Fried is part of that as some
would say cult right he's convicted for
stealing as much as 10 billion from his
customers they literally believe that
they're saving the world that's their
mission these effective altruists truly
believe what they're saying about AI
safety the idea that within a few months
or a few years it'll cause the
extinction of the human race unless we
stop all progress on it now and this is
the problem with all of that and this is
the thing that I hope more people
understand is that they refer to
themselves as being part of AI safety
they've kind of hijacked that term
instead of saying they're AI doomers AI
apocalypse you know Terminator 2 AI is
going to turn us all into paper clips
they say AI safety which is a problem
because as many longtime Ai and
biosecurity researchers in Washington say
there's much more evidence backing up
their less than existential AI concerns
While most acknowledge the possibility
that existential risks could one day
emerge they say there's so far little
evidence is there any evidence but
they're saying there's little evidence
to suggest that future AIs will cause
such a destruction even when paired with
biotechnology the point here is that we
have much more pressing concerns than
Terminator robots marching down a street
yes we need to be cognizant of that yes
we need to make sure we're not ignoring
the risk of Rogue AI Etc but as AI is
coming into the various software various
businesses various government
organizations we have very pressing very
real concerns about how it's going to be
used and these doomsday scenarios are
corrupting that conversation they're
leading us away from actual things that
matter it's like imagine if you're
trying to regulate Cars and auto safety
things like seat belts making sure that
you have enough lights you know crumple
zones Etc like the safety features of
automobiles but there was this other
group that was slowly taking over
Washington that actually defined Auto
Safety car safety as no we have to get
rid of all cars forever because cars are
dangerous so while the normal reasonable
people are trying to make cars safer the
other people are saying we also want to
make cars safer but their goal is
actually just getting rid of all of it
because they have this belief that one
day cars will rise up and kill everybody
or whatever you probably don't want
those people making the regulations
here's an example where Deborah Raji a
Mozilla fellow and AI researcher at
Berkeley you know her research focused
on how AI can harm marginalized
communities but that was completely
overshadowed by open philanthropy funded
researchers that suggested that llms
like ChatGPT could supercharge the
development of bioweapons and so Raji
saying well if you just look online for
a second you can find all that stuff on
Google the fact that you can get the llm
to regurgitate that stuff if you try
hard enough there's nothing exceptional
about it but her research is left in the
dust because she doesn't have the funds
that these other researchers have that
are concerned about an AI doomsday
scenario as EAs bring their message to
virtually every corner of the nation's
capital experts are warning that the
tech funded flood is reshaping
Washington's policy landscape driving
researchers across many organizations to
focus on the existential risks posed by
new technologies often to the exclusion
of other issues with firmer empirical
grounding as another researcher puts it
I don't want to call myself AI safety
that word is Tainted now right they want
to call themselves something different
like system Safety Systems engineer
right because saying AI safety now might
link them to this cult-like organization
concerned about things that again don't
really have too much proof behind them
by the way the founders of Anthropic
used to consider themselves part of EA
now it seems like maybe they're kind of
trying to distance themselves away from
that movement here's Steven Pinker
cognitive scientist at Harvard saying
it's upsetting to read how the shift in
EA from saving lives in Africa because
again that's where they started trying
to do good for the world for Humanity
reduce poverty reduce suffering Etc so
from saving lives in Africa to paying
brainiacs to fret about how AI will turn
us into paperclips may not have been
mission drift but bait and switch EA's
core ideas are still sound and some of
its charities are praiseworthy I hope
the movement regains its bearings here's
Yann LeCun saying as I have pointed out
before AI doomerism is a kind of apocalyptic
cult why would its most vocal Advocates
come from Ultra religious families that
they broke away from because of Science
And the big concern with some of the people who believe in this stuff is that, if they're allowed to regulate where we're going with AI, their goals aren't to create common-sense regulations like what most of us are talking about: having some visibility into who's building what, some reporting requirements, some safety limits, some safety testing, red-teaming efforts, etc. That's what they say they want, common-sense regulations; that's what they're telling you. But what are they saying behind closed doors? Well, here is Jaan Tallinn, co-founder of the Future of Life Institute and one of the biggest x-risk billionaires. Here's his plan for regulatory interventions; his hope for the foreseeable future is the following:
I do think that governments, certainly governments, can make things illegal. Well, you can make hardware illegal; you can also say that producing graphics cards above a certain capability level is now illegal, and suddenly you have much, much more runway as a civilization. Do you get into a territory of having to put surveillance on what code is running in a data center? Yeah, I mean, regulating software is much, much harder than hardware. If you let Moore's law continue, then the surveillance has to be more and more pervasive. So my focus for the foreseeable future will be on regulatory interventions, trying to educate lawmakers, and perhaps hiring lobbyists to try to make the world safer.
Again, keep in mind they have billions of dollars for this, billions. So they're not talking about regulating in the way that you and I think of regulating; they're talking about a global ban on any sort of training runs. Take a listen: the nasty secret of the AI field is that AIs are not built, they are grown. The way you build a frontier model is you take like two pages of code, you put them in tens of thousands of graphics cards, and let them hum for months.
And then you go and open up the hood and see what creature comes out and what you can do with this creature. So I think the capacity to regulate things and deal with various liability constraints, etc., applies to what happens after, once this creature has been tamed (that's what the fine-tuning and reinforcement learning from human feedback, etc., are doing) and then productized; then how do you deal with these issues? This is where we need the competence of other industries. But how can you avoid the system escaping during the training run? This is a completely novel issue for this species, and we need some other approaches, like just banning those training runs. The other thing I forgot to mention is that Helen Toner talked about the reason everyone backed Sam Altman (remember, all the employees signed letters and tweeted those heart emojis saying they wanted to stay at OpenAI, to stay with Sam Altman, even though apparently he was psychologically abusing everyone). Helen Toner was saying the only reason they did that is they were afraid OpenAI would be destroyed under her leadership, or if she did what she was trying to do. And then during the interview she said, well, that was not true; the company would never have been destroyed, that was not on the table. But the thing is, that's not what she was saying while the whole thing was unfolding. Back then she was saying the destruction of the company could be consistent with the board's mission. She was trying to either have Anthropic absorb OpenAI or destroy it, which would probably, again, destroy a lot of value, a lot of these people's hard work, the equity they have in the business. Also, if you recall,
when OpenAI researchers were fired for leaking information out of OpenAI, at least one of them had ties to that effective altruism movement. So I do apologize for that rant. I know we've covered some of this before, but seeing this discussed by other people in the space, other YouTubers talking about AI, I was personally a little bit concerned that a lot of people seem to be taking what she's saying at face value. Again, I'm more than happy to hear if she has any proof of anything, any specific things we can look at and say, okay, it does look like this was correct. So far it seems like an attempt to paint Sam Altman in a bad light, which, again, is fine; I'm not here to defend Sam. You might not like him; you might prefer that somebody else succeeds with AI, maybe Elon Musk, or perhaps Google or Anthropic. Or, like a lot of us, myself included, you might believe in open source. We think open-source AI is an important technology; it obviously comes with some risks as well, but it does have a lot of upside. For example, open-sourcing some cybersecurity technology, or at least people getting together and sharing what they learn about cybersecurity, helps everyone be more secure. If every company just kept what they knew to themselves, everyone would be more exposed, more vulnerable, because they wouldn't be sharing their knowledge. With open source, everyone can contribute to that knowledge, potentially helping prevent vulnerabilities. But
it's important to understand that, whatever your views on any of these companies, some of the voices you're hearing come from a small minority of people who want everything shut down: global surveillance on GPUs, limits on how advanced our chips can be, hitting the pause button indefinitely, and, as you've seen, even potentially going after other countries with nuclear weapons. So, two nuclear powers duking it out to prevent the development of AI. I don't know about you, but that seems a little bit crazy, doesn't it? We know we can destroy the Earth dozens of times over with the nuclear weapons we already have; it takes only a few bad decisions and everyone's gone. But they're willing to flip that coin to make sure AI doesn't get developed. I don't think this is a very rational perspective; this is very cultish. This is from the blog of Vitalik Buterin. He's contributed close to a
billion dollars to some of these causes. He's the person behind Ethereum; he became very wealthy, bought some Dogecoin, and when it did very well and skyrocketed, he decided to throw a billion at these groups. But based on his writing (and he has kind of addressed this), I don't think he's necessarily aligned with them; he's not part of that AI doomer mentality. Here he describes the world as he sees it. His first image shows the anti-technology view: if we move forward, the future is dark and scary; technological progress is bad, and safety lies behind us. So, dystopia ahead, safety behind: we need to stop progress and unwind the technology, the AI, etc. Then there's the accelerationist view: dangers behind and utopia ahead; the idea is that technological progress will improve the world, humanity, etc., and that the dangers lie behind us if we don't progress. I think EA seemingly adopts that anti-technology view, or at least an anti-AI one, and of course we have the accelerationists saying, you know, full steam ahead, let's go. His own view he describes more as: yes, there are dangers behind, so we have to progress, but there are also multiple paths forward ahead, some good, some bad, which I think is very reasonable. So with that said, my whole
point in this is: don't just approach this from a stance of whether OpenAI is good or bad, whether Sam Altman is good or bad, or whether we should have AI safety. I think most people would agree AI should be deployed safely; there are a lot of different bad things that can happen, and we need to be very careful how we approach it. During the development of the nuclear bomb, someone suggested there was a chance the explosion would trigger a chain reaction that set the entire atmosphere on fire, a fiery death for the entire world. That triggered an investigation: people looked into it and tried to figure out the chance of that happening. They logically examined that potential x-risk; smart people who knew what they were talking about, who had field expertise, education, and training in that particular area, studied the question and came up with answers. That's how it should be. We do the same thing when we roll out any new technology: new cars, new drugs, new electronic devices, new kids' toys. There are laws and regulations for how we reduce those risks.
That's very reasonable; no one here is against that. But if there are people out there who are convinced we need to start nuking other nations off the face of the Earth if they're developing AI, who think we should monitor all hardware, all GPUs, all software, to ban training runs and create a global surveillance system to make sure none of that happens, then I hope you agree with me that maybe, just maybe, we shouldn't let those people govern and regulate the world. That's not so crazy; I hope you agree. With that said, my name is Wes Roth, and thank you for watching.