OpenAI Is FALLING Apart. (Ilya Sutskever Leaving, Super alignment Solved? Superintelligence)
Summary
TLDR: The video script discusses the recent developments and challenges within the field of artificial intelligence, particularly focusing on OpenAI. It highlights the departure of key figures like Ilya Sutskever and the appointment of Jakub Pachocki as the new Chief Scientist. The script delves into the concepts of AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence), emphasizing the urgency of solving the alignment problem to ensure these advanced systems work in humanity's best interest. It also touches on the competitive landscape, with companies like Meta investing heavily in AI, aiming to build and distribute general intelligence responsibly. The content warns of the potential risks if AGI and ASI are not developed and managed carefully, suggesting that safety research must be a priority. The video aims to keep viewers informed about the rapid advancements and ethical considerations in AI technology.
Takeaways
- Ilya Sutskever, a key figure at OpenAI, has left the company and is pursuing a personally meaningful project, with details to be shared in due time.
- Jakub Pachocki, a prominent researcher, has been appointed as the new Chief Scientist at OpenAI, taking over from Ilya Sutskever.
- OpenAI has been experiencing a series of departures, including members of the Superalignment team, which could impact the company's progress on AI safety.
- The concept of superintelligence is complex, and there's uncertainty around how to align such a system, with some suggesting that an aligned virtual system could iteratively bootstrap alignment for subsequent generations.
- Concerns have been raised about the potential risks of AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence), including the possibility of a 'winner-takes-all' scenario.
- Predictions for the arrival of AGI range from as early as 2024 to a median estimate of 2029, with some experts suggesting superintelligence could arrive by the end of this decade.
- There is significant investment in AI research and infrastructure by major companies, indicating a belief in the near-term potential of AGI.
- Control of AGI and subsequent ASI could lead to unprecedented power and capabilities, potentially resulting in a significant shift in global dynamics.
- The 'black box' problem of AI remains a challenge, as the inner workings and decision-making processes of advanced AI models are not fully understood.
- The alignment problem is a significant concern, with current strategies involving using each generation of AI to help align the next, though this approach is not without risks.
- Companies are focusing on developing AGI to gain a competitive edge, but there is a call for more emphasis on safety and ethical considerations in AI development.
Q & A
What is the significance of Ilya Sutskever's departure from OpenAI?
-Ilya Sutskever's departure is significant as he is considered one of the greatest minds in the field of AI. His leadership and contributions were instrumental in shaping OpenAI, and his exit marks a notable change in the company's technical direction and future projects.
Who is taking over Ilya Sutskever's role at OpenAI?
-Jakub Pachocki, who has been with OpenAI since 2017 and has led transformative research initiatives, including the development of GPT-4 and OpenAI Five and fundamental research in large-scale reinforcement learning and deep learning optimization, is taking over Ilya Sutskever's role as the new Chief Scientist.
What was Ilya Sutskever's statement regarding his departure from OpenAI?
-Ilya Sutskever expressed that after almost a decade, he decided to leave OpenAI. He praised the company's trajectory and expressed confidence in its future under its current leadership. He also mentioned that he was excited about a personally meaningful project he would be sharing details of in due time.
What is the 'Super AI' that OpenAI is working towards, and what are the concerns associated with it?
-Super AI, or artificial superintelligence (ASI), refers to a system that surpasses human intelligence in virtually all areas, capable of creating new knowledge and making discoveries beyond human comprehension. The concern is that such a system could potentially go rogue, and if not properly aligned with human values and goals, it could lead to unpredictable and potentially disastrous outcomes.
What is the 'alignment problem' in the context of AI?
-The alignment problem refers to the challenge of ensuring that AI systems, particularly superintelligent ones, act in a way that is aligned with human values and interests. It is a significant issue because as AI systems become more advanced, they may develop goals or behaviors that are misaligned with what is beneficial for humanity.
Why is there speculation that OpenAI may have solved the AGI alignment problem?
-Speculation arises from the departure of key members of the Superalignment team, like Ilya Sutskever and Jan Leike, which might suggest that they achieved a significant breakthrough. Additionally, the fact that some team members left without providing detailed reasons for their departure fuels this speculation.
What does the term 'blackbox problem' refer to in AI?
-The 'blackbox problem' in AI refers to the lack of transparency and interpretability in modern AI models, particularly deep learning models. These models are so complex that even their creators cannot fully understand their inner workings, which poses a risk as unintended behaviors may emerge without the ability to predict or control them.
What is the potential timeline for achieving AGI and ASI according to some experts and companies?
-According to some experts and companies, AGI could be achieved by 2029, with a 15% chance predicted for 2024 and an additional 15% for 2025. ASI, being a step beyond AGI, might be achieved shortly after AGI, potentially within a year, if the AGI system is robust and capable enough.
Why is there a concern about the 'Winner Takes All' scenario in the race for AGI?
-The 'Winner Takes All' scenario is concerning because the first entity to achieve AGI could use it to rapidly advance and create ASI, thereby gaining an insurmountable lead over competitors. This could lead to a monopolization of technology with significant economic and geopolitical implications.
What is the role of compute power in the development of AGI and ASI?
-Compute power is crucial for the development of AGI and ASI as it allows for the training of increasingly complex and capable AI models. With more compute power, companies can experiment with larger datasets and more intricate algorithms, pushing the boundaries of what AI can achieve.
How does the departure of key personnel from OpenAI's Super AI team impact the field of AI safety?
-The departure of key personnel can impact AI safety as these individuals were at the forefront of research into aligning superintelligent systems with human values and goals. Their absence may slow progress in this critical area, potentially increasing risks associated with the development of AGI and ASI.
Outlines
Departure of Ilya Sutskever from OpenAI
The video discusses the recent news of Ilya Sutskever's departure from OpenAI, a significant event as he is considered one of the greatest minds in the field of AI. The host expresses sadness over the loss, acknowledging Sutskever's contributions to the company and his role as a guiding light. The video also introduces Jakub Pachocki as the new Chief Scientist at OpenAI, highlighting his impressive credentials and the confidence in his ability to lead the company forward. Furthermore, it touches upon Sutskever's future plans, which he describes as 'personally meaningful,' and the anticipation surrounding his next move in the AI community.
OpenAI's Superintelligence Alignment Challenge
This section delves into OpenAI's ambitious goal of solving superintelligence alignment within four years. It outlines the formation of a specialized team aimed at addressing the challenges of artificial superintelligence (ASI), which is expected to surpass human intelligence significantly. The video highlights the loss of key members from the superalignment team, including the resignation of Jan Leike, and speculates on the reasons behind these departures. It also discusses the potential implications of these changes for the future of AI safety and the progress towards aligning superintelligent systems.
AGI and ASI: The Future of AI Capabilities
The video explores the concept of artificial general intelligence (AGI) and artificial superintelligence (ASI), emphasizing the transformative impact these technologies could have on society. It discusses the iterative process of aligning successive generations of AI models and the potential for a virtual system to solve alignment problems for newer models. The speaker also addresses concerns about the alignment of superintelligent systems and the challenges that current and future generations of AI pose to researchers in the field.
Key Departures and the Impact on AI Safety
This part of the video script focuses on the recent departures of key personnel from OpenAI's superalignment team and the potential consequences for AI safety. It mentions the firing of researchers and the concerns of former employees about the company's responsible behavior in the face of AGI. The video also discusses the speculative nature of whether the alignment problem has been solved and the broader implications of these departures for the field of AI.
AGI's Imminent Arrival (Barring a Black Swan)
The speaker expresses concerns about the potential for AGI to arrive sooner than expected, with predictions ranging from 2024 to 2029. The video highlights the exponential growth of AI capabilities and the belief that AGI could be just around the corner. It also touches on the responses from researchers like Daniel Kokotajlo about the magical capabilities of ASI and the dramatic shifts in the dynamics of power that AGI could bring about.
The Winner-Takes-All Scenario in AGI Development
The video discusses the competitive landscape of AI research and development, suggesting that the first entity to achieve AGI will have a significant advantage, potentially insurmountable for competitors. It emphasizes the demand for AI researchers and the importance of aligning AGI and ASI with human values to prevent misuse. The video also mentions the potential for immortality by 2030 due to advancements in longevity research and AI breakthroughs.
The Risks of Unregulated AGI Development
This section addresses the potential risks and regulatory challenges associated with AGI and ASI development. It raises concerns about the lack of understanding of how modern AI models work, likening the complexity of superintelligent systems to explaining economics to a bee. The video also discusses the alignment problem and the potential for a rogue ASI if training runs exceed expectations, highlighting the precarious nature of relying on hope to ensure AI safety.
Superhuman AI Systems: The Control Challenge
The video outlines the efforts of OpenAI's superalignment team to address the challenge of controlling superhuman AI systems. It discusses the release of the team's first paper, which introduces a new research direction for aligning superhuman models. The paper explores the possibility of using smaller, less capable models to supervise larger, more capable ones, as a means to understand and control future AI systems. The video also mentions the concerns about the focus and resources dedicated to safety research amidst the race to develop AGI.
The Billion-Dollar Race for AGI Breakthroughs
The final paragraph discusses the massive investments being made by companies in the race to develop AGI and the recent technical breakthroughs that suggest timelines for achieving AGI are getting shorter. It emphasizes the urgency and rapid pace of development in the field of AI, with companies like Meta committing significant resources to building general intelligence. The video concludes with a call to subscribe for updates on the latest AI news, reflecting the fast-evolving nature of the industry.
Keywords
Artificial General Intelligence (AGI)
Artificial Super Intelligence (ASI)
Alignment Problem
Blackbox Problem
Exponential Growth
Weak-to-Strong Generalization
OpenAI
Compute
Ilya Sutskever
Safety Research
Scalability
Highlights
Ilya Sutskever, a key figure in AI, has left OpenAI and is moving on to a new project.
Jakub Pachocki has been appointed as the new Chief Scientist at OpenAI.
Pachocki's impressive background includes leading significant projects and fundamental contributions to scaling deep learning systems.
Ilya Sutskever expressed confidence in OpenAI's future and shared his excitement for his upcoming, personally meaningful project.
OpenAI has been experiencing a loss of key team members, raising concerns about AI safety.
The concept of 'superalignment' is introduced, aiming to solve the core technical challenges of superintelligence alignment within four years.
Two key members of the superalignment team at OpenAI, including Ilya Sutskever, have left the company.
Jan Leike, head of alignment, resigned from OpenAI, leaving the super alignment team without its lead.
Leike's resignation sparks speculation about the status of the alignment problem and the future direction of AI safety at OpenAI.
Researchers speculate that the easier problem of aligning next-generation models like GPT-N+1 might be closer to a solution.
Daniel Kokotajlo, a former member of OpenAI's governance team, left OpenAI citing concerns about responsible behavior around AGI.
Kokotajlo's departure indicates a potential lack of confidence in OpenAI's approach to AGI development.
OpenAI's focus on safety research is questioned, with only a fraction of resources dedicated to alignment and control.
The potential for a 'winner-takes-all' scenario in AI is discussed, emphasizing the importance of being the first to achieve AGI.
The rapid pace of AI development suggests that AGI could be achieved within the current decade.
The transition from AGI to ASI (Artificial Super Intelligence) is expected to be swift, with significant implications for humanity.
The potential for immortality by 2030 is mentioned due to advancements in longevity research and the possibilities of ASI.
Concerns are raised about the 'black box' problem in AI, where the inner workings of AI models are not fully understood.
OpenAI's strategy for aligning future AI systems using each generation to inform the next is described.
The risks of creating a rogue ASI that does not align with human values or interests are acknowledged.
Transcripts
So I'm just going to get straight into this video, because there's no point in wasting time. I think certain parts of OpenAI are truly starting to fall apart. This video might be a long one, but trust me, every single slide has been carefully created so that you guys can understand all of the information. You can see here that one of the main things we got today was the news about Ilya Sutskever and OpenAI. This was something a lot of people were actually waiting on, because we hadn't heard from Ilya pretty much since November, and even in recent interviews Sam Altman consistently refused to speak on what Ilya's status was at OpenAI. But we finally have the news. He says that Ilya and OpenAI are going to part ways: 'This was very sad to me. Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. His brilliance and vision are well known; his warmth and compassion are less well known but no less important. OpenAI would not be what it is today without him. Although he has something personally meaningful he is going to work on, I am forever grateful for what he did here and committed to finishing the mission we started together. I am happy that for so long I got to be close to such a genuinely remarkable genius, and someone so focused on getting the best future for humanity. Jakub is going to be our new Chief Scientist. Jakub is also easily one of the greatest minds of our generation, and I'm thrilled he's taking the baton here. He has run many of our most important projects, and I'm very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone.' So that statement clearly shows that Ilya Sutskever is no longer working at OpenAI and is going to be working on something else, and at least we can now say we have some kind of closure on where one of the greatest minds in AI is headed.
Now, one of the things I did want to know is who is going to be replacing Ilya Sutskever at OpenAI, and that is of course Jakub Pachocki. Essentially, if we look at who Jakub is: Jakub is now the Chief Scientist at OpenAI, where he has led transformative research initiatives since 2017. He previously served as Director of Research, spearheading the development of GPT-4 and OpenAI Five, and fundamental research in large-scale reinforcement learning and deep learning optimization. He has been instrumental in refocusing the company's vision towards scaling deep learning systems, and he holds a PhD in theoretical computer science from Carnegie Mellon University. So clearly this is someone with a very, very impressive résumé, and clearly someone with all of the necessary skills to take on the role. Now, as for Ilya Sutskever, something I know many people have been wondering is what he said, and he posted this tweet after quite some time: 'After almost a decade, I have made the decision to leave OpenAI. The company's trajectory has been nothing short of miraculous, and I'm confident that OpenAI will build AGI that is both safe and beneficial under the leadership of Sam Altman, Greg Brockman, Mira Murati, and now, under the excellent research leadership of Jakub Pachocki. It was an honor and a privilege to have worked together, and I will miss everyone dearly. So long, and thanks for everything. I am excited for what comes next, a project that is very personally meaningful to me, about which I will share details in due time.' So clearly Ilya seems to have left on good terms, despite the entire tumultuous period that was the firing of Sam Altman. But I think the thing most people are looking forward to now is, of course, what comes next: he says he is excited about a project that's very personally meaningful to him, about which he will share details in due time. So whatever Ilya Sutskever does next, I'm guessing we will receive an update, maybe in a few weeks, maybe a few months, I have no idea, but it seems we're going to be getting an update sometime in the near future.
Now, there was also this piece: whilst things on the surface might look like OpenAI is completely fine, the following slides I'm about to show you paint an entirely different picture, because OpenAI has been losing key members of its most important team in regards to AI safety, and I'm about to break this all down for you, because when I started doing the research on this I was like, wow, things are starting to look a little bit pessimistic in terms of AI safety. Now, Ilya also did tweet this picture with the rest of the OpenAI team, but I do remember that during December there were some tweets by Ilya Sutskever that were vaguely worded in a way that kind of implied OpenAI was a toxic work environment. He said: 'I learned many lessons this past month. One such lesson is that the phrase "the beatings will continue until morale improves" applies more often than it has any right to.' This could of course just be dubious speculation as to what he was talking about, but given when it was tweeted, it could be argued that it was only related to one key event, and that of course was OpenAI at that time. But there were not really many statements in addition to this, because of the secrecy around why Sam Altman was fired.
Now, here's where things get really, really crazy. Most people don't know about superalignment, because it's something aimed at the future. If you don't know what superalignment is, it is OpenAI's plan to solve the alignment of superintelligence, a system that is much better than AGI. It says: 'Our goal is to solve the core technical challenges of superintelligence alignment in four years.' Basically, they decided to build a specific team to solve this specific problem, because they knew that in the future they're going to have AGI, and after AGI comes ASI, artificial superintelligence. Superintelligence is quite hard to explain, but think of it like this: if AGI can do pretty much any task better than any human, and it's going to be everywhere, artificial superintelligence is going to be able to do things you can't even fathom. It's going to be able to create new knowledge, do new research, discover cures for currently incurable diseases; it's pretty much going to feel like a magical time if we manage to get superintelligence right. Now, with superintelligence you've got a problem: you're building a system that is so smart it could go rogue, and if a superintelligence goes rogue we literally don't stand a chance, because a superintelligent system is going to be able to outsmart us, and we're not going to be able to understand what its goals are or even what it's doing. And you can see here that they said this might not even work. It says: 'While this is an incredibly ambitious goal and we're not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem. There are many ideas that have shown promise in preliminary experiments, we have increasingly useful metrics for progress, and we can use today's models to study many of these problems empirically.'
So here's where things start to fall apart for OpenAI; here's where the alarm bells really start to go off. Ilya Sutskever had made this core research his focus, and was co-leading the team with Jan Leike, the Head of Alignment, and it says: 'Joining the team are researchers and engineers from previous alignment teams, as well as other researchers across the company.' Now here's the thing: on this superalignment team, the team that was meant to figure out how to solve the alignment problem for artificial superintelligence, Ilya Sutskever is now gone, and Jan Leike actually quit today. So two key members of the superalignment team are now gone. You can see that earlier today Jan Leike literally tweeted, 'I resigned.' Now, what I found most interesting about this was the fact that Jan Leike didn't say anything other than 'I resigned.' Remember, when Ilya Sutskever resigned he wrote a heartfelt message that shows he cares about OpenAI and where it goes, but Jan Leike stating 'I resigned' with no further context leaves it open to complete speculation as to why he resigned. Now, like I said before, this is a problem, because these were the people trying to solve superalignment, but you have to understand that it actually gets a lot worse, because they're not the only people who have resigned from the superalignment team. One thing you need to know about superalignment, and something a lot of people are starting to speculate about, is that because the head of alignment on the superalignment team resigned today, maybe this means they solved the alignment problem, at least in terms of aligning a superintelligence.
So if you don't believe this, let's take a look at what Jan Leike said in a recent interview. Now, I've listened to it, and it's over two hours long, but this is the main bit you need to pay attention to. He says: 'If you're thinking about how you actually align a superintelligence, how do you align a system that's vastly smarter than humans? I don't know. I don't have an answer. I don't think anyone really has an answer. But it's also not the problem that we fundamentally need to solve; maybe this problem isn't even solvable by humans who live today. But there's this easier problem: how do you align the system that is the next generation? How do you align GPT-N+1? And that is a substantially easier problem. And then, even more, if humans can solve that problem, then so should a virtual system that is as smart as the humans working on the problem. And so if you get that virtual system to be aligned, it can then solve the alignment problem for GPT-N+1, and then you can iteratively bootstrap yourself until you're actually at superintelligence level and you've figured out how to align that. And of course, what's important when you're doing this is that at each step you have to make enough progress on the problem that you're confident GPT-N+1 is aligned enough that you can use it for alignment research.' So he's basically saying: how do you align, say, GPT-5; then, when you have GPT-5, how do you align GPT-6; and when you have GPT-6, how do you align GPT-7? His point is that this is a much easier problem, and that even if humans could align GPT-N+1 themselves, humans shouldn't have to be the ones doing it: a virtual system as smart as the humans working on the problem should do it, and once that system is aligned, it can solve the alignment problem for the next generation, letting you iteratively bootstrap all the way up to superintelligence. In other words, every time we step up from GPT-4 to whatever the next frontier model is, we have to make sure that model is aligned well enough that we can then use it for alignment research, as the sketch below illustrates.
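To make that loop concrete, here is a minimal, self-contained sketch of the iterative bootstrapping idea in Python. Everything in it is hypothetical: the `Model` class, the training and research steps, and the numbers are toy stand-ins invented for illustration, not any real OpenAI system or API. The one thing it is meant to capture is the invariant Leike describes: you only promote a generation to 'automated alignment researcher' once you're confident it is itself aligned.

```python
from dataclasses import dataclass

# Toy simulation of iterated alignment bootstrapping. All names and numbers
# here are hypothetical stand-ins for research processes, not real APIs.

@dataclass
class Model:
    capability: float  # how capable this generation is
    alignment: float   # how aligned we believe it is, from 0 to 1

def train_next_generation(model: Model) -> Model:
    # Each generation is more capable but, before alignment work, assumed
    # to be less reliably aligned than its predecessor.
    return Model(capability=model.capability * 2,
                 alignment=model.alignment * 0.8)

def do_alignment_research(researcher: Model, subject: Model) -> Model:
    # The trusted current generation works on aligning the next one; we
    # pretend its effectiveness scales with its own alignment.
    improved = min(1.0, subject.alignment + 0.3 * researcher.alignment)
    return Model(subject.capability, improved)

def bootstrap(base: Model, target_capability: float,
              trust_threshold: float = 0.9) -> Model:
    current = base
    while current.capability < target_capability:
        # The invariant: never use an insufficiently aligned generation
        # as the alignment researcher for the next one.
        assert current.alignment >= trust_threshold, "generation not trusted"
        next_gen = train_next_generation(current)     # GPT-N -> GPT-N+1
        next_gen = do_alignment_research(current, next_gen)
        current = next_gen
    return current

print(bootstrap(Model(capability=1.0, alignment=0.95), target_capability=100.0))
```

The whole plan, of course, hinges on that assert never failing, which is exactly the part Leike says nobody yet knows how to guarantee.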
This is why a lot of people have now thought: two of the founding members of superalignment have now left, and remember, these two left, they didn't get fired, and remember Jan Leike said 'I resigned' with no further context; maybe they actually managed to solve this alignment problem. And if you think it's not that bad, take a look at this. Leopold Aschenbrenner, who used to work on superalignment at OpenAI, also no longer works there, and he was actually fired. It says OpenAI has fired two researchers for allegedly leaking information, according to a person with knowledge of the situation: Leopold Aschenbrenner and Pavel Izmailov, allies of Ilya Sutskever, who participated in a failed effort to force out Sam Altman last fall. And you can also see here it says OpenAI staffers had actually disagreed on whether the company was developing AI safely enough. Now, what's crazy about this is that the two people recently fired at OpenAI were actually members of the superalignment team. So when you look at the picture now, in the recent paper where OpenAI were talking about weak-to-strong generalization, eliciting strong capabilities with weak supervision, which is basically a paper where they're trying to solve superalignment and showing how it can be done, I've crossed out four names, because these four people no longer work at OpenAI: Pavel Izmailov, Leopold Aschenbrenner, Jan Leike, and Ilya Sutskever. None of these people work at OpenAI anymore. Pavel, a researcher who was on that superalignment paper, now works at Elon Musk's AI company, so it will be interesting to see how his career develops.
Now, what's also crazy: you might be thinking, okay, just a few people from superalignment left, that might not be that big a deal. Well, that's not the whole story either, because as I dug into this I realized that more people have left OpenAI as well. You can see right here that it says OpenAI researchers Daniel Kokotajlo and William Saunders recently left the company behind ChatGPT, and that Daniel said on a forum he doesn't think OpenAI will behave responsibly around the time of AGI. That is a key reason, and I think it's important to understand that when people leave, we have to look at why they leave. If someone's leaving for family or personal reasons, that is completely different; but if someone's stating that they're leaving because they don't believe the company will behave responsibly around the time of AGI, that is a key, key indicator that maybe, just maybe, something is wrong. You can see here it says Daniel was on the governance team and Saunders worked on the superalignment team at OpenAI. So that is five people from the original superalignment effort who have now completely left OpenAI, which is remarkable considering that the superalignment team was formed to solve superalignment within four years, and literally five of the founding members are gone, four of whom were on this recent paper. And you have to understand that superalignment is one of the key things we need to solve if we're going to increase AI capabilities, which is why many are speculating that the GPT-N+1 alignment problem has been solved, since it was the substantially easier problem.
Now, one thing that did really concern me was Daniel's actual post about magical capabilities and other futuristic things that artificial superintelligence could do, and this is something I did cover in another video, which I'm going to include here. The point, as I'm about to explain, is how crazy this is about to get. Given what we've seen here, and the fact that members of the superalignment team are just gone, literally five or six people from the team, some people are speculating whether this problem has been solved and it's now literally just a compute problem, and we're just building data centers because we already know how to get to AGI. But anyway, if you want to take a look at what Daniel said, because he said he doesn't believe OpenAI are going to behave responsibly around the time of AGI, given that it's a very powerful tool that will be able to do pretty much anything, that it's going to grant them immense power and will literally shift dynamics in certain countries, I think you need to take a look at how crazy this document is, the one Daniel wrote before he left OpenAI, because it was truly eye-opening to see what he thinks the future is going to be like. It's intense.
So we're going to go through this list, because there actually is quite a lot to talk about and a lot of things you need to be aware of, because a lot of people who saw this list thought about a few things but didn't think about how the industry is going to evolve as a whole. Let me tell you why this was genuinely a shocking statement, because there was one item I saw where I thought, okay, that is a super big deal, and that just completely changes my timeline. One of the things he stated is that there will probably be AGI soon, any year now, and this is something that unfortunately doesn't surprise me. If you've been paying attention to this space, you'll know we've had many different instances and inklings of AGI, and many of us do feel like we're on the edge of our seats, because we know AGI is just around the corner. That's for a variety of factors: the fact that OpenAI has been in a really, really strange position, delaying releases on multiple products, having the Sam Altman firing, and other companies also having major breakthroughs and working on AGI as well. So literally any year now we could be getting AGI, not only because of where these companies are, but because of the significant investment they're also attracting.
Now, he also spoke here, and you can see he responded to someone's question about the percentage chance of AGI. They asked: why do you have a 15% chance for 2024 and only an additional 15% for 2025? Now, I do think we get AGI by the end of 2025, or at least that some lab makes an insane breakthrough and we have AGI by the end of 2025; that's just what I believe. But the questioner says: do you really think there's a 15% chance of AGI this year? And he says: 'Yes, I really do. I'm afraid I can't talk about all of the reasons for this, you know, I work at OpenAI, but mostly it should be figure-out-able from the publicly available information,' which we've discussed several times on this channel. 'My timelines were already fairly short, 2029 is the median, when I joined OpenAI in early 2022, and things have gone mostly as I expected. I've learned a bunch of stuff, some of which updated me upwards and some of which updated me downwards. As for the 15%/15% thing, I don't feel confident that those are the right numbers; rather, those numbers express my current state of uncertainty. I could see the case for making the 2024 number higher than the 2025 number, because of exponential-distribution vibes: if it doesn't work now, that's evidence it won't work next year. And I could also see the case for making the 2025 number higher, because projects take twice as long as one expects, due to the planning fallacy.' So essentially he's saying roughly 15% this year and a 30% cumulative chance by the end of next year, while acknowledging, of course, that he could be completely wrong.
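As a side note on the arithmetic: per-year probabilities like these stack into a cumulative forecast. The toy Python below is my own back-of-envelope illustration, not anything Kokotajlo published; it shows the quoted 15% + 15% giving roughly a 30% chance by the end of 2025, and, for contrast, what a flat annual hazard consistent with a 2029 median would look like.

```python
# Toy arithmetic only: the 15%/15% figures are Kokotajlo's quoted estimates;
# the constant-hazard comparison is our own illustrative assumption.

p_2024, p_2025 = 0.15, 0.15
print(f"P(AGI by end of 2025) = {p_2024 + p_2025:.0%}")  # 30%

# A constant annual hazard h whose median lands at 2029 (roughly 5.5 years
# out from the time of the post) satisfies (1 - h) ** 5.5 == 0.5.
h = 1 - 0.5 ** (1 / 5.5)
print(f"constant hazard for a 2029 median: {h:.1%} per year")
for year in range(2024, 2030):
    cumulative = 1 - (1 - h) ** (year - 2023)
    print(year, f"{cumulative:.0%}")
```

Against that flat-hazard baseline, the 15% for 2024 is front-loaded, which fits his remark that he could see the case for making the 2024 number higher than the 2025 one.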
Now, with AGI predictions, it's of course anyone's guess, but this prediction by ARK Invest is a good visual to look at when you're thinking about how AGI might progress over the next couple of years. Something I do want to say about this is the exponential nature of things, because they take that into account too: the original prediction was for around the end of 2029 or 2030, which is where many people have placed it. I'm going to get to the next point in a moment, which is what really shocked me, but essentially you can see here that as the forecast era continues, the estimate keeps moving down, and it seems to land around 2027. You can see these huge drop-offs where the forecast date just keeps falling. Of course, it's exponential, kind of like S-curve growth: you go up, you plateau for a little bit, and then you get that next boom once the next thing arrives; we're kind of seeing an inverted S-curve on that graph as well. I know I showed this in a previous video; I just wanted to show it again so you guys can visualize where things are going.
people are predicting next year and
there are a few small reasons why but I
definitely do believe that if anyone is
going to get to it it will be open AI
obviously because of um the kind of
talent that they have the kind of you
know researches that they have it's a
unique team and open AI researchers are
some of the most sought after Talent
like you know um essentially it's so
crazy that you know I think there was
someone recently that was hired from uh
Google that went to a different company
and then Google started paying the
person four times more just because AI
researchers are so in demand right now
because it's such a competitive space
that um there is one tweet that I do I'm
going to come back to again that I think
you guys need to understand that if
someone develops AGI I think you guys
have to understand that it's going to be
a Winner Takes all scenario because it's
only a race until one company reach
reaches AI once once that happens the
distance to their competitors is
potentially infinite and it will be
noticeable for example raising a quar
trillion of the US GP now of course some
people are stating that this is where
you know open AI has already achieved
AGI they're just trying to raise raise
compute because they realized that we're
going to need a lot more compute for
this kind of system but Others May
disagree and I kind of do agree that you
know potentially um they probably have
internal AGI but just need more compute
to actually bring the system to reality
because certain things they just simply
can't test because they're trying to run
GPT 4 they're also trying to run some
other systems like Sora they're also
trying to to to give some of their
compute to Super alignment um so that is
a thing as well now this is the
Now, this is the statement that really did shock me, and it's why my timelines got updated, because it changes everything. He says: probably, whoever controls AGI will be able to use it to get to artificial superintelligence shortly thereafter, maybe in another year, give or take a year. Now, you have to understand that AGI is essentially a system that is as good as humans at pretty much any task, at least any task that can be done in a non-physical realm, and AGI is going to be better than 99% of humans, according to DeepMind's 'Levels of AGI' paper, which classes the GPT-4 we have right now as a low-level AGI. So when we do get that AGI system, it's going to accelerate everything, because it means we can just duplicate researchers, duplicate mathematicians, duplicate people doing a whole bunch of work. And this is crazy, because artificial superintelligence is completely next level: an intelligence so smart it will be able to make consistent breakthroughs, and it's going to fundamentally change our understanding of everything we know, because it's going to be that smart. That is of course a problem, because there are alignment problems and so on, but the thing is, artificial superintelligence is something people didn't really even talk about, because it seems so far away; yet he's stating that whoever controls AGI will be able to use it to get to ASI shortly after. So if it's a true AGI, a really good one, getting to ASI won't take that long, and that is a true statement, and something I hadn't thought about that much.
But it's crazy, because OpenAI have openly stated that superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world's most important problems; but the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity, or even human extinction. And it states that while superintelligence seems far off now, they believe it could arrive this decade. That's why this is kind of shocking: some people think AGI arrives by 2029, but OpenAI are saying, not AGI by 2029, superintelligence could be here by the end of this decade. Which means, if we look at the actual data and think about what's going on here, we could realistically get AGI by 2026 and then ASI by 2029; that's something that could happen given the nature of exponential growth and these timelines, and OpenAI stated that themselves. That's also why they're actually working on this kind of alignment: because they know it could be very, very soon.
Now, in addition, if you want to talk predictions, you have to call on Ray Kurzweil as well. He's a futurist who has made a lot of predictions, 147 of them, with an 86% win ratio, I guess you'd call it. Of course, some people have debated whether that ratio is as high as he claims, but I would say a decent share of his predictions have come true. His prediction on AGI is that artificial intelligence will achieve human level by 2029, which is still going to be pretty crazy even if it happens in 2029, because, remember, Elon Musk has said 2027; everyone's timelines are getting shorter by the day, and given what's actually going on right now, if we had AGI within two years it genuinely wouldn't surprise anyone, especially after what we saw with Sora. Now, another thing Ray Kurzweil stated that was actually quite shocking, and this is why I don't think you guys understand the kind of world we could be living in if we actually do get AGI and then ASI, is that there's a possibility we might achieve immortality by the year 2030. That's partly because we are doing well in longevity research and that kind of thing, but if we do have artificial superintelligence, it's going to allow us to make breakthroughs that just completely change everything. And that's why this is so shocking, because I hadn't realized the jump could take only a year. I don't know; maybe people aren't thinking about things such as the actual compute, the laws that might try to regulate this kind of thing into the ground, or maybe some kind of financial crash, or other things that could potentially stop this. But provided everything is smooth, there's no Black Swan event, no bubonic plague, the world doesn't need to go into a shutdown, and AGI research isn't delayed, ASI by the end of the decade is a pretty scary thing to think about, and that is why I said this genuinely did shock me.
And one of the craziest things as well, like I said I was going to come back to this: how on Earth do other companies catch up? Think about this. Let's say you're OpenAI and you're working on artificial general intelligence. One day you wake up and your researchers, your whole team, say: look, we've done it, we've achieved AGI, we've benchmarked it on all of this, it's 99% on this and that; we've done it, we've achieved AGI. Boom. How on Earth do other companies catch up? Because the moment you get AGI, you can use it to get towards ASI, and your company just scales 10x overnight, or even 100x overnight, because all you need to do is get the AGI to do certain things. It's going to be relatively cheap, too: instead of hiring another person you'd have to pay a million a year, you could essentially have these super-powerful researchers doing tons and tons of alignment research, ASI research, and your company could gain an additional hundred employees every day, as long as you're scaling with compute. How on Earth do other AI companies catch up to a company that's basically achieved escape velocity? I don't think they will. I genuinely don't think other companies will catch up, unless somehow it leaks and the AGI tech is widely distributed. And when I say AGI tech, I'm actually talking about the research papers and the research behind it, not OpenAI giving you restrained access like they do with GPT-4, because the version we get is a very, very nerfed-down model compared to the raw capabilities the models offer. Essentially, this is some of Anthropic's pitch deck from when they wanted to raise money in 2023, and they basically said: we believe that companies that train the best 2025-2026 models will be too far ahead for anyone to catch up in subsequent cycles. And if you don't know who Anthropic are, they're a big AI company competing with OpenAI; some OpenAI researchers left to create Anthropic because they wanted to focus on safety. But the point here is that I don't think they catch up, and it does make sense: if a company has AGI, they have arguably the best technology of the last 20 years, and with that they can grow exponentially. So I don't think people catch up; I think the leader will be so far gone that it's going to be pretty crazy to see what happens. And I think this is why people are stating that OpenAI may have achieved AGI and are currently using it to develop things like Sora, and if that is true it kind of does make sense, because Sora definitely blew my hat off. Even as someone who looks at AI all the time, when I saw that I was like, whoa, okay, I didn't think we were that close. But yeah, it's definitely pretty crazy.
And it goes on. Here it states 'godlike powers', and listen to this, this is the craziest bit; I was reading it thinking, is this even real, am I even living in reality right now? It says: probably, whoever controls artificial superintelligence will have access to a spread of powerful skills and abilities, and will be able to build and wield technologies that seem like magic to us, just as modern tech would seem like magic to medievals. This will probably give them godlike powers over whoever doesn't control ASI. So that brings up an important question: do you think OpenAI, let's say they have ASI and they have it aligned, are going to distribute ASI, or are they just going to patent all the technologies as a kind of subsidiary of OpenAI? Because if they have ASI and nobody else has it, that's going to be the most valuable thing on the planet, and if they're able to distribute cures, if they're able to distribute new technology, that's going to make the company super, super valuable. Because, like it states here, it will probably give them godlike powers over anyone who doesn't control ASI, because that level of smartness is unfathomable. It's very hard to conceptualize how smart it is: according to several reports and researchers, it's basically like trying to explain economics to a bee. You know, a bee, the thing that buzzes around; try explaining economics to that. It's very hard to conceptualize how you would even begin: first you'd have to teach it English, then you'd have to teach it so many other concepts, and even then, trying to teach it abstract context would be pretty crazy. So whilst this does seem good, and whilst godlike powers and so on are why all these companies are racing to achieve AGI, because they know that once it's there it's like gaining an instant 100x speed boost in this kind of race, there's a catch.
The problem is the blackbox problem, and a lot of people are starting to forget about it as we edge closer and closer towards the edge of this huge cliff we could be on. The document states: in general, there's a lot we don't understand about modern deep learning. Modern AIs are trained, not built/programmed. We can't verify, for example, that they are generally, robustly helpful and honest instead of just biding their time; we can't check. So the problem here is that we don't know how these AI models work. We actually don't know what's inside them; we don't know how everything fits together. It's not like you write code and understand exactly how the code works; that's not how these AI models are. And in the future it's going to be a bigger problem, because if we're 'growing' an AI, which is what some researchers have claimed would be a more accurate description, how on Earth are we then going to understand these even-more-superintelligent systems if we don't really understand the ones we have now? It's pretty crazy; I'm trying hard to put it into words, but it is a very, very giant problem that people are trying to solve.
And of course, here we have the alignment problem. Further, currently no one knows how to control artificial superintelligence, which is true, and they are working on it; this is what OpenAI is currently working on. And it says: if one of our training runs turns out to work way better than we expect, we'd have a rogue artificial superintelligence on our hands, and hopefully it would have internalized enough human ethics that things would be okay. That's a crazy statement; I don't care what you say, that is insane, because he's basically saying that if our training runs work out better than we expect, unfortunately we'd have a rogue ASI on our hands, because we don't know how to align it. They're basically saying that if we train the next model and it turns out artificially superintelligent, which I don't think it will be, I do think you need a ton of compute, just like how things were scaled up before, then hopefully, we're basically just hoping, it won't go crazy. And that is quite scary, that hope is mentioned here. It says there are some reasons to be hopeful about that, but there are also some reasons to be pessimistic, and the literature on the topic is small and pre-paradigmatic, which is of course true.
Then of course we have Sam Altman, in a great clip which you guys should take a look at, because he actually talks about the alignment problem: 'We're going to make this incredibly powerful system, and it would be really bad if it doesn't do what we want, or if it sort of has goals that are either in conflict with ours, there are many sci-fi movies about what happens there, or goals where it just doesn't care about us that much. And so the alignment problem is: how do we build AI that does what is in the best interest of humanity? How do we make sure that humanity gets to determine the future of humanity? And how do we avoid both accidental misuse, where something goes wrong that we didn't intend, and intentional misuse, where a bad person is using an AGI for great harm, even if that's what the person wants, and then the kind of inner-alignment problems, where: what if this thing just becomes a creature that views us as a threat? The way that I think the self-improving systems help us is not necessarily by the nature of self-improving, but we have some ideas about how to solve the alignment problem at small scale, and we've been able to align OpenAI's biggest models better than we thought we would at this point, so that's good. We have some ideas about what to do next, but we cannot honestly look anyone in the eye and say we see out 100 years how we're going to solve this problem. But once the AI is good enough that we can ask it, hey, can you help us do alignment research, I think that's going to be a new tool in the toolbox.'
So essentially, in that clip, Sam Altman actually does talk about how they're going to use AI, an internalized version of maybe an AGI or a narrow AI that's able to really, really understand how to align these AI systems, and of course he does talk about the fact that we could have an AI that just eventually evolves into some kind of creature that does its own thing, and that's pretty scary coming from the CEO of a major company that is building some of the most impactful technology we will have in our lifetimes. And of course, here we have the best plan. It says: our current best plan, championed by the people winning the race to AGI, is to use each generation of AI systems to figure out how to align and control the next generation, and this plan might work, but skepticism is warranted on many levels.
So OpenAI did actually talk about their approach to this, and I think it's important to look at it, because their goal is to build a roughly human-level automated alignment researcher; the idea is that they can then use vast amounts of compute to scale their efforts and iteratively align superintelligence, superintelligence being that crazily smart level of AI system that's going to have goals beyond our understanding. And essentially they're saying that to align the first automated alignment researcher, they will need to develop a scalable training method, validate the resulting model, and stress-test the entire alignment pipeline. So of course they're going to do adversarial testing, where they test the entire pipeline by deliberately trying to see what goes wrong, in a kind of sandbox environment, and try to detect how things would go wrong. I'm guessing this is one of their approaches, and they've shown that this kind of approach can work.
called weak to strong generalization
eliciting strong capabilities with weak
supervision so I'm going to show you
guys that page now and essentially here
you can see they talk about the super
intelligent problem and of course super
intelligent is a big problem and this is
actually pretty recent which is uh quite
interesting this was December the 14th
2023 so around 2 three months ago they
said we believe super intelligence and
AI vastly smarter than humans could be
developed within the next 10 years
however we don't know how to reliably
steer and control superhuman AI systems
so solving this problem is essential for
ensuring that even the most advanced AI
systems are beneficial to humanity just
going to zoom in here we formed the T
alignment team earlier this year to
solve this problem and today we're
releasing the team's first paper which
introduces a new research Direction um
for officially solving superhuman models
so basically they state that you know
future AI systems will be capable of
extremely complex and creative behaviors
that will make it hard for humans to
basically look over them and watch and
understand for example superhuman models
may be able to write millions of lines
of code potentially dangerous computer
code that will be very hard for even
expert must to understand so essentially
So they made this setup, and with it they say: to make progress on this core challenge, we propose an analogy we can empirically study today. Can we use a smaller, less capable model to supervise a larger, more capable model? You can see here we've got traditional machine learning, where the human supervisor oversees a student model that is not as smart as them, and then we have superalignment, where the human researcher is trying to supervise a student that is way smarter than they are; that's where you can see the robot just above the human level in the diagram. And it's like, how on Earth is that supposed to work? What they're trying to do is this: if we can get a smaller AI system to supervise a larger AI system while both are still beneath human level, hopefully we can scale that progress, and then when we get to the level of superalignment, hopefully the same thing works.
And essentially, that's what they did. They said: when we supervise GPT-4 with a GPT-2-level model using this method on NLP tasks, the resulting model typically performs somewhere between GPT-3 and GPT-3.5, and we were able to recover much of GPT-4's capabilities with only much weaker supervision. They note this method is a proof of concept with important limitations; for example, it still doesn't work on ChatGPT preference data. However, they also find signs of life with other approaches, such as optimal early stopping and bootstrapping from small to intermediate to large models.
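To make the setup more concrete, here's a minimal sketch of the weak-to-strong idea on a toy task. This is not OpenAI's code: the synthetic ring-classification dataset, the MLP sizes, and the training details are all assumptions made purely for illustration, with small and large MLPs standing in for the GPT-2-level supervisor and GPT-4-level student:

```python
# Weak-to-strong sketch: a small "weak" classifier is trained on ground
# truth, then its noisy predictions are the ONLY supervision for a larger
# "strong" model. Dataset, sizes, and hyperparameters are toy assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n):
    # Synthetic 2D task: label = whether the point lies inside a ring.
    x = torch.randn(n, 2) * 2
    y = ((x.norm(dim=1) > 1.5) & (x.norm(dim=1) < 3.0)).long()
    return x, y

def mlp(width):
    return nn.Sequential(nn.Linear(2, width), nn.ReLU(),
                         nn.Linear(width, width), nn.ReLU(),
                         nn.Linear(width, 2))

def train(model, x, y, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

x_weak, y_weak = make_data(500)    # small labeled set for the weak model
x_big, _ = make_data(5000)         # large pool; ground truth held out
x_test, y_test = make_data(2000)

weak = train(mlp(width=8), x_weak, y_weak)          # "GPT-2-level" supervisor
weak_labels = weak(x_big).argmax(dim=1).detach()    # weak pseudo-labels

strong = train(mlp(width=128), x_big, weak_labels)  # "GPT-4-level" student

print(f"weak supervisor accuracy: {accuracy(weak, x_test, y_test):.3f}")
print(f"strong-on-weak accuracy:  {accuracy(strong, x_test, y_test):.3f}")
```

The interesting question is whether the strong model generalizes beyond its supervisor's mistakes rather than just imitating them; in OpenAI's NLP experiments it recovered much of the gap, though, as they say, only as a proof of concept.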
Essentially, this is just their first paper on thinking about how they could even try to solve this, but I do think it's important. Now of course, here's the problem. It says: for one thing, there is an ongoing race to AI with multiple megacorporations participating, and only a small fraction of their compute and labor is going towards alignment and control research, and one worry is that they aren't taking this seriously enough. Now, on the slide just before that, if you saw what OpenAI said (I'm not sure on which page, but somewhere OpenAI said it), 20% of their overall compute is going to safety research, which does make sense, because if you haven't heard of the elephant in the room: the elephant in the room is that if these superintelligent systems don't work out, we all die.
And of course, you might be thinking, how on Earth do we all die? I could play a clip, but essentially you just have to think about it like this. You know how ants just walk around and do their thing? Imagine if an ant created a human, and then humans started building highways. We destroy ant colonies because we need to remove their environment in order to place down a highway and build homes; we just see ants as a minor inconvenience, and so ants die in the process. Some people are speculating that it's going to be the same with artificial intelligence, and we have no idea whether that's true, because the only way to find out is to do it, and if we do it and we die, then I guess we're never really going to know, because we're all dead. As horrible as that is, the point I'm trying to make here as well is that all these companies are now placing their chips on AGI, because they've realized this is the next technology: whoever holds this key is going to pretty much control, I think, a lot of the world's resources. Because if you have an intelligent ASI system and you just ask it, how do we become the most valuable company in the world, it's going to get it right; I mean, if it's smarter than us, it's going to get it right. So however long it takes, that's going to be interesting to watch.
So Meta's going all in. This is Mark Zuckerberg stating that his company is going all in on AI: "Hey everyone, today I'm bringing Meta's two AI research efforts closer together to support our long-term goals of building general intelligence, open-sourcing it responsibly, and making it available and useful to everyone in all of our daily lives. It's become clearer that the next generation of services requires building full general intelligence. Building the best AI assistants, AIs for creators, AIs for businesses, and more, that needs advances in every area of AI, from reasoning to planning to coding to memory and other cognitive abilities. This technology is so important, and the opportunities are so great, that we should open source and make it as widely available as we responsibly can, so that everyone can benefit. We're building an absolutely massive amount of infrastructure to support this. By the end of this year, we're going to have around 350,000 Nvidia H100s, or around 600,000 H100 equivalents of compute if you include other GPUs. We're currently training Llama 3, and we've got an exciting roadmap of future models that we're going to keep training responsibly."
That just shows you that all of these companies are truly pouring billions of dollars into this, and the crazy thing is that they're making breakthroughs. It's not like they're doing this just for fun. You can see that recently there was a technical breakthrough (this isn't Meta, by the way; it's a private company called Magic) that could enable active reasoning capabilities similar to OpenAI's Q* model, which was apparently a huge breakthrough. This is why I keep saying that timelines are getting shorter and shorter, and we have people stating some pretty wild things.
And of course, this once again brings us back to the Moloch problem: if AGI could arrive any year now, and timelines keep getting shorter because whoever controls AGI is going to be able to get to ASI shortly thereafter, then we have a real problem with safety research being neglected. Some people have even left OpenAI over this; the people who made Anthropic, like Dario Amodei, who left OpenAI to start Anthropic because he wanted to focus on safety, recently did a paper on sleeper agents. I might include a clip from the video where I talked about that, why it was really bad, and why everyone missed the mark on it; some people were saying it was just dumb, but essentially we do have a problem on our hands, because every day the timelines seem to be getting shorter and shorter, whether it's an OpenAI employee leaving or a private company making a breakthrough that enables active reasoning. I think it's not smart to underestimate the fact that AGI will be used to get to ASI shortly thereafter.
And this statement, that whoever controls ASI will have access to powerful skills and abilities that will seem like magic to us, just like modern tech would seem like magic to medieval people, isn't to be underestimated. Think about it like this; this is why superintelligence is so crazy. If we go back to medieval times, when they just had castles, and we ask them, how would you defeat this army from the future, they would say: we'd get our cannonballs, we'd get our bows and arrows, and we'd be able to defeat them. But they wouldn't, because we'd have tanks and planes and an advanced level of technology that would simply destroy anything they ever had. And that's the problem with artificial superintelligence: you're trying to reason about something that is very hard to conceptualize. I mean, take all of the current tech we do have: if you brought an iPhone back 100 years, it would seem like magic; if you saw a drone, it would look like magic. And that's just 100 years of progress without artificial superintelligence.
So you can imagine how crazy things are going to look. I genuinely can't even begin to imagine what the future will look like. Are we all going to be immortal? How is the timeline going to play out? I think one of two things happens: it comes either faster than we think or later than we think; I don't think it comes exactly on time, because there are always factors people aren't accounting for. And of course, who knows, maybe we'll hit a wall; maybe AGI comes much later down the line because we discover some wall we can't get past that requires more years of breakthroughs, and we end up kind of stagnant at GPT-4 for a while. But it will be interesting to see where we go and how these timelines evolve, because things are moving rapidly. And if you did enjoy this, it's important to subscribe to the channel, because every day I release a video on the most important and most pressing AI news that you need to be aware of.