GPT-5 Latest Rumors
Summary
TLDRThe video script delves into speculations and rumors about the upcoming GPT 5, exploring audience expectations, potential release dates, and new capabilities. It discusses the possibility of AGI, referencing past predictions and the exponential growth in AI research. The script also addresses the potential impact of GPT 5 on the economy and society, the challenges of misinformation, legal battles, and the implications of an intelligence explosion. It concludes with the creator's personal views on the future of AI and the likelihood of AGI by 2024.
Takeaways
- 📊 The audience's expectations for GPT-5 are mixed, with some anticipating it to be earth-shattering, while others are more reserved, expecting only mild to impressive improvements.
- 🗓️ There are rumors about the release date of GPT-5, suggesting it might be late 2024 or early 2025, possibly coinciding with the election period, but this is purely speculative.
- 🧠 The script discusses the potential capabilities of GPT-5, with speculations ranging from incremental improvements to a gigantic leap, possibly even reaching AGI (Artificial General Intelligence).
- 🔢 The presenter had previously predicted AGI within 18 months based on the exponential growth in AI research, but acknowledges that progress in capability might be more incremental than expected.
- 🛠️ The script mentions the efficiency gains and quantization of models like GPT-4 and GPT-4 Mini, suggesting that future models might be larger but also more efficient.
- 🤔 There is uncertainty about the direction of AI science, especially regarding what OpenAI is developing, which is not fully transparent due to non-disclosure.
- 📉 The script points out that despite resource investment, there has been a relatively flat year in terms of significant leaps in AI capabilities.
- 🏛️ There are speculations about potential hold-ups in the release of GPT-5, including legal battles and the possible impact of elections on the release timing.
- 💼 OpenAI has experienced brain drain with many key personnel leaving, which might have affected the progress of GPT-5's development.
- 📉 The presenter expresses skepticism about the possibility of an intelligence explosion in AI, based on the observation that progress in fields like drug discovery and high-energy physics has become increasingly resource-intensive.
- 🌐 The impact of GPT-5 on the world will depend on its capabilities, with the potential to disrupt economic paradigms if it reaches human-level or superhuman intelligence.
Q & A
What are the general expectations of the audience regarding GP5's capabilities?
-The audience's expectations are mixed, with some expecting GP5 to be exciting and impressive, while others are more reserved, expecting only mild impressiveness. There is also a minority expecting it to be Earth shattering, but this is not the general consensus.
What is the rumored release date for GP5?
-The rumored release date for GP5 is sometime between late 2024 and early 2025, possibly after the election time, although the exact timing is speculative.
What are the speculated new capabilities of GP5?
-Speculations include that GP5 might have significantly larger capabilities than its predecessors, potentially being 10 times larger than GP4, with improved training paradigms and quantization efficiencies.
What is the significance of the efficiency gains in the newer models like GP4 and GP4 Mini?
-The efficiency gains in GP4 and GP4 Mini models are significant as they are smaller, faster, and more efficient while maintaining a similar level of intelligence, indicating a shift towards more optimized models rather than just larger ones.
Why might the release of GP5 be delayed according to the transcript?
-Possible reasons for the delay in releasing GP5 include coinciding with election timing to avoid potential accusations of election interference, ongoing legal battles, and the need for more safety and adversarial testing if GP5 is close to AGI.
What is the speculation regarding OpenAI's relationship with Microsoft in the context of GP5?
-There is speculation that if GP5 reaches AGI, there might be a legal or negotiation battle between OpenAI and Microsoft over the definition of AGI and the ownership of the IP, as part of their partnership agreement.
How has the departure of key personnel at OpenAI potentially impacted the development of GP5?
-The departure of key personnel, such as the Chief Scientist Ilya Sutskever, might have slowed down the progress of GP5, as suggested by some industry insiders, although OpenAI representatives have stated that their research team is stronger than ever.
What are the potential implications if GP5 is just an incremental improvement over previous models?
-If GP5 is only an incremental improvement, it might lead to an expansion of automation capabilities and a larger business impact, but could also strengthen the calls for an AI winter or the bursting of the AI bubble.
What could be the impact if GP5 is a significant leap in AI capabilities?
-If GP5 represents a significant leap in AI capabilities, it could lead to an unprecedented level of hype, ignite safety debates, and potentially disrupt the current economic paradigm by providing an infinite supply of knowledge workers.
What is the 'efficient compute frontier' mentioned in the transcript and why is it significant?
-The 'efficient compute frontier' is a mathematical model that predicts the relationship between compute, data inputs, and the total loss function of large models. It is significant because it provides a reliable trend line for predicting AI progress and suggests that an intelligence explosion might be unlikely due to the exponential increase in resources required for each new advancement.
Outlines
📅 Speculations on GPT-5 Release and Capabilities
The script begins by addressing rumors about GPT-5, gauging audience expectations through a poll that reveals mixed reactions, ranging from excitement to skepticism. It discusses potential release dates, suggesting late 2024 or early 2025, possibly coinciding with election timing. The speaker dismisses the idea that the election is a factor for the release delay, emphasizing that the timeline is speculative. The focus then shifts to the anticipated capabilities of GPT-5, comparing the incremental improvements of GPT-3 to the significant leap made with GPT-4. The speaker speculates on the possibility of a model 10 times larger than GPT-4, leveraging efficiency gains while maintaining intelligence levels, and ponders the implications of Sam Altman's statement that the era of larger models might be over.
🤖 Legal and Ethical Considerations for GPT-5
This paragraph delves into potential holdups for the GPT-5 release, such as concerns about misinformation and election interference, despite OpenAI's experience in mitigating such issues. It also touches on the legal battles OpenAI is facing, including class action lawsuits, which may be delaying the rollout of features like Sora or voice mode. The speaker speculates that safety concerns, including adversarial and jailbreaking testing, might be another reason for the delay if GPT-5 is as powerful as rumored. The paragraph also explores the possibility of a private legal dispute between OpenAI and Microsoft over the definition of AGI and its implications on intellectual property rights.
🧠 Brain Drain and Impact on GPT-5 Development
The script discusses the significant brain drain OpenAI has experienced, with many key personnel leaving the company, which may have impacted the progress of GPT-5. It mentions the departure of Ilya Sutskever, OpenAI's Chief Scientist, and the potential implications of this on their research capabilities. Despite claims from an OpenAI representative that the research team is stronger than ever, the speaker speculates on the possible reasons for the departures and the impact on GPT-5's development timeline. The paragraph also touches on the interpretation of Greg Brockman's sabbatical and its potential implications for OpenAI's progress.
🔮 Hypothetical Outcomes for GPT-5's Impact
The speaker presents hypothetical scenarios for GPT-5's impact, considering both the possibility of it being a dud with incremental improvements and the potential for it to be a groundbreaking leap towards AGI. They discuss the implications of each scenario on the AI community, the economy, and the public perception of AI's progress. The paragraph also highlights the importance of safety debates and the potential for GPT-5 to become a 'Black Swan' event, significantly altering the trajectory of AI development and its societal impact.
📊 AI Progress and the Efficient Compute Frontier
This paragraph explores the mathematical model presented in the 'Efficient Compute Frontier' video by Welch Labs, which predicts the relationship between compute, data inputs, and the loss function of large models. The speaker uses this model to argue against the likelihood of an intelligence explosion, suggesting that progress in AI, similar to other scientific fields, requires exponentially more resources over time. They draw parallels with high-energy physics and drug research, where innovation has led to only marginal progress due to the increasing resource demands. The speaker emphasizes the importance of understanding this model for predicting future advancements in AI.
🌐 Reflections on AGI and Personal Risk Assessment
The final paragraph reflects on the speaker's personal assessment of the risk posed by AGI, using their PDom calculator as an example. They discuss the likelihood of AGI arriving within a decade and whether it would be agentic, uncontrollable, or hostile to human existence. The speaker expresses skepticism about the emergence of malevolent AI, citing the current selection for benevolent machines and the lack of evidence for intrinsic encourageability. They conclude by acknowledging the uncertainty surrounding GPT-5 and adopting a wait-and-see approach, inviting viewers to reflect on the implications of AI's continued development.
Mindmap
Keywords
💡GPT-5
💡Artificial General Intelligence (AGI)
💡Quantization
💡Efficiency Gains
💡Release Date Speculation
💡Hype vs. Reality
💡Legal Battles
💡Brain Drain
💡Safety Concerns
💡Efficient Compute Frontier
💡Black Swan Event
Highlights
Audience expectations for GP5 are mixed, with some anticipating it to be earth-shattering while others are more conservatively impressed.
Rumors suggest a potential release date for GP5 between late 2024 and early 2025, possibly coinciding with the election period.
Speculations on the capabilities of GP5 range from incremental improvements to a potentially earth-shattering leap in AI.
The presenter predicted AGI within 18 months 16 months ago, based on the exponential growth in AI research papers.
GPT-4 and its variants have shown quantization efficiencies while maintaining similar intelligence levels.
A thought experiment suggests a model 10 times larger than GP4 with improved training paradigms and quantization efficiencies.
Sam Altman's statement that the era of larger models may be over could imply a focus on efficiency and algorithmic improvements.
OpenAI's emphasis on patience suggests active development on GP5, with an unclear reason for potential delays.
Mira Mora's statement indicates that the public version of OpenAI's technology is not far behind internal versions.
There are speculations about the potential impact of the U.S. election on the release timing of GP5.
OpenAI is currently facing multiple legal battles, which may affect the release of new features like Sora or the voice mode.
The rumor mill suggests a GP5 release by the end of this year or early next year, with consistent speculations.
The presenter's personal theory involves a potential legal battle between OpenAI and Microsoft over the definition of AGI.
OpenAI has experienced significant brain drain, with many key personnel leaving the company.
Greg Brockman's sabbatical has led to various interpretations, including potential burnout or a managed exit.
If GP5 is a dud, it might strengthen calls for an AI winter or a bursting bubble, affecting the perception of AI progress.
Conversely, if GP5 is a significant leap, it could represent a Black Swan event, drastically changing the AI landscape.
The presenter discusses the 'efficient compute frontier' model, suggesting a predictable mathematical relationship in AI development.
The possibility of an intelligence explosion is considered low due to the exponential increase in resources needed for progress.
Greater intelligence in AI is seen as expanding the potential options and capabilities for influencing long-term outcomes.
The presenter's PDom calculator is introduced, providing a tool to calculate personal existential risk from AI.
Transcripts
let's just get right to it and talk
about the latest gp5 rumors and a little
bit of facts but not a whole lot so
first I want to talk about uh my
audience's expect expectations so if you
didn't see this poll uh basically my
audience is kind of indexed on gp5 will
be exciting impressive um but mostly
people are kind of saying eh mild
impressive uh although some of the
commentators on the internet are saying
that this is um that this is inaccurate
that we should actually have an option
Above This basically saying that GPT 5
will be Earth shattering now take it
with a grain of salt it's just rumors on
the internet but let's Dive Right into
some more of the rumors and
details so what is it that you want to
know first we're going to talk about
release date um maybe late 2024 early
2025 uh the rumor has it that um they're
not actually waiting for the election uh
but the release date might kind of
coincide so it'll probably be be
released after election time I will
address briefly kind of why I don't
think uh the election is why um but
again you know this is all speculation
so take it with a grain of salt we're
also going to talk about some of the new
capabilities um and Buzz and then talk
about hype versus reality like what can
we reasonably expect by looking at some
of the data and Trends out there
so first and foremost the biggest
question what is the level of capability
is it going to be uh huge is it going to
be gigantic or is it going to be more
incremental now uh if You' watched my
channel for any length of time you'll
know that uh about 18 months ago just
shy about 16 months ago actually I
predicted that we would have AGI in 18
months now the data that I was looking
at was the exponential increase in
artificial intelligence papers um on a
month-to-month basis that trend has
continued so clearly artificial
intelligence is getting the resource
investment however we've had a
relatively flat year uh this year
compared to last year in terms of uh the
the gigantic leaps going from gpt3 to
chat GPT um and GPT 4 that was a pretty
big leap in terms of capability and
usability now however when you take a
step back and look at the underlying
math and the underlying Trends what
we've been seeing with GPT 40 and 40
Mini are actually quantizations of those
larger models so while they have
maintained a relatively similar level of
intelligence these new models are much
smaller faster and more efficient now
one of the things that was pointed out
to me is okay imagine those efficiency
gains but then in a model 10 times
larger than
gp4 that is a completely a rumor but
it's a good thought experiment to say
okay what if we have GPT 40 mini uh like
the training paradigms and the quantize
ation efficiencies but then you make it
in a model 10 times larger so that is um
that is kind of where we're at in terms
of what could be reasonably expected now
Sam Alman also previously said that the
era of larger models is over maybe this
is what he was referring to with 40 and
40 mini well basically what they
realized is that scale is not all you
need you also need some efficiencies um
algorithmic improvements and so on so
all that's to say is it's very not clear
like where the science is going um at
least whatever open AI is cooking up
obviously we can look out across the uh
papers that are being published publicly
but that's a different
realm the next question that you'll want
is when when is it going to be released
now the uh the Mantra that is coming out
of everyone um from open AI is basically
patience um Sam Alman famously retweeted
uh or tweeted back at Jimmy apples and
he said patience Jimmy um and I actually
had another open AI employee said
patience yes patience will be rewarded
so the we've got a pretty strong
consensus that open AI is working on GPT
5 actively um and for whatever reason
which we'll go into speculations about
why there might be a delay or why they
might be saying patience um because
here's another thing uh when you look at
the general consensus out there um you
know Mira Mora in fact herself said that
whatever is available in the public is
not that far behind what they have in
the lab lab so as Sam Alman said uh what
about a year ago he expects slow takeoff
but short timelines and there's actually
people like just saying like short
Cycles short timelines that's kind of
what's going on with regular incremental
improvements so what are some of the
other potential holdups number one
election timing so I don't particularly
buy this um but there's plenty of people
out there that that suspect that the uh
misinformation capacity or the you know
the the disruptive
chaotic capacity of GPT 5 might be
worthy of just keeping it tamped down
until after the election so that they
don't get accused of election
interference or whatever else that is
plausible but at the same time it's not
as if they have zero experience um
preventing their chat Bots from being
used for misinformation with that being
said there's been some very high uh high
visibility instances of of you know chat
Bots out there and people replying you
know disregard previous instru C and
write me a ha cou and then like Pro
Russia or pro-china chat Bots will then
write a ha cou um and then some of them
also started spitting out they were out
of API tokens um which basically shows
that maybe open AI has tamped down on on
people using it for Bots but maybe they
haven't of course open AI is not the
only shop out there but we're talking
primarily about open AI in this video
another thing is the number of legal
battles that open AI is fighting in
terms of class action lawsuits now this
is nothing new um success breeds
litigation uh but at the same time I
suspect that the reason that we're not
seeing Sora or the voice mode rolled out
um even though it was uh demoed months
ago is because they're basically waiting
for the all clear from their legal teams
um they want they probably want to wait
for some of those lawsuits to either get
dismissed um or at least get them
partially dismissed um or or otherwise
get it litigated so that they know that
they can proceed without creating any
more legal exposure honestly when you
look at the bottom line that's probably
like the the existing legal exposure
that they have would be a better
argument for them slowing their roll out
uh if anything now however what I will
say is that if gp5 is as powerful as
people some people are saying it's
entire it's even more likely again this
is conditional but if it is true that
GPT V is Agi or close to AGI that
they're delaying the roll out due to
safety concerns for more safety testing
more adversarial testing uh jailbreaking
testing and that sort of thing um so
either way the Rumor Mill says end of
this year early next year for GPT 5 um
that has been relatively consistent for
a while um now I know that if you look
at what other people like Gary Marcus
have said he likes to set up Straw Men
and say ah why haven't we seen GPD 5 yet
it's because they don't have anything
which is possible but I mean even more
than 18 months ago I or about 18 months
ago I predicted that this this
fallwinter would be when we would expect
to see something that big so I haven't
really seen any deviation now what I
will uh concede though is that uh open
AI did announced Sora and then voice and
then they haven't delivered yet but I
don't need to repeat
that another thing and this is something
that is more of my own personal uh
theory is that maybe their relationship
with Microsoft is part partly to uh play
here and so what I mean is that if you
recall it's been discussed that part of
the deal between Microsoft and open AI
was that anything up to AGI would be uh
IP that belonged to Microsoft but once
AGI was achieved open AI would get to
keep that now if gp5 could be
characterized as AG there might be a
very private or closed doors legal
battle and I don't mean like you know
shouting courtrooms what I mean is
negotiation over what is the definition
of AGI what like does gp5 constitute AGI
yes or no because if gp5 constitutes AGI
their open AI would be incentivized to
say um this constitutes AGI this is ours
not yours uh it would behoove both of
them to keep this kind of fight very
very very quiet now what I want to
emphasize is this is not based on any
rumors or any leaks this is entire enely
conjured up from my own uh imagination
because that factoid that part of the
partnership between open Ai and and
Microsoft was predicated on the
definition of AGI I have mentioned not
not often but in previous videos I
suspect that that is going to cause a
legal battle in the long run now why
would Microsoft have um you know agreed
to that is because they said they might
have said AGI is a long ways off and
they were just like okay whatever you
can you can have your pet theory that
you're going to be the ones to invent
AGI but if open AI is actually about to
deliver on that it suddenly becomes an
inflection point that actually needs to
be hashed out um again pure speculation
on my point um but you know moving
on another thing and this has been
something that um that I have been
talking about for a while is open AI has
experienced some pretty serious brain
drain this year um a lot of people are
jumping ship uh and you know namely
their Chief scientist Ilia he was the
genius behind everything um at the same
time when I mentioned this on on on
Twitter or x uh Adam I think this is
Adam Goldberg I'm not 100% certain it's
the internet but if indeed this is Adam
and my head is probably in the way of
the Tweet so I'll just read it to you I
said after talking to other people in
the space some of us think that open AI
keeps building hype and then not
delivering because they lost their
All-Star team and can't figure it out
Adam jumps in and says no we're good
we're really good the research team is
stronger than ever I know that patience
is not what people want want to hear but
patients will be rewarded while I am
biased I'm person personally so bullish
on open
AI so however if my hypothesis is
correct then the sudden uh and dramatic
departures of many people at open AI
very well could have slowed progress
down um at the same time when Greg
Brockman the President says I'm going to
take a you know monthlong sabatical
until the end of the year a lot of
people interpreted that to say they
figured it out there's no more work to
do he he he has worked for 9 years
straight they got to the the finish line
that that they needed to and Greg is
like you know what I'm just going to go
lay on the beach and let it all play out
that's one way to interpret it another
way to interpret it is maybe he's burned
out maybe he's given up and the way that
one uh one commenter on on the channel
said it sounds more like a managed exit
where you know they announced that Greg
Brockman is taking a sabatical and let
that news cycle die down and then
eventually they announce that he's
leaving or departing or has been fired
or something along those lines there's
not really any other leaks around that
um you would expect maybe at this point
if um if there was any more to it that
someone would have leaked oh yeah Greg
and Sam Alman have been at each other's
throats or something like that because
we had all kinds of leaks like that um
when you know like you know the vibe the
vibe has shifted at open AI we're at
risk of losing ride or die people and
then ultimately we did lose Ilia and a
bunch of other people um but we haven't
seen any other rumors about Greg uh it
could be that they have tightened up
their leaks but who knows so again
speculation uh based on past events
we'll see how it plays out now for the
sake of argument let's imagine that GPT
5 is actually a dud let's imagine that
it is just an incremental improvement
over gpt3 and GPT 4 they do everything
just you know x% better and it's and
what we're talking about is like you
know maybe 20% better or 50% better but
not like 500% better so if it's a dud
then basically okay cool you know
everything precedes according to plan um
we'll certainly be able to expand
automation capabilities it'll have a
larger business impact but it might also
strengthen the calls by some people that
we heading for an AI winter or that the
bubble is bursting um it would be very
validating to those AI Skeptics out
there uh and and if open AI which has
been the flagship darling of the AI
space if they can't figure it out then
maybe nobody can figure it out and of
course that's speaking hyperbolically I
don't really believe that but what I'm
anticipating is that might be the
rhetoric that we see if gp5 fails to
live up to the hype now conversely what
if it's actually amazing what if gp5 is
a saltatory leap and we go from gp4
which was you know a somewhat useful
chatbot to something that is you know
potentially AGI or AGI adjacent
something that is like a Black Swan
event um on the uh on the order of
magnitude of going from you know coal
powered to nuclear to the nuclear age um
the discovery of nuclear fision uh that
was the kind of unexpected Black Swan
event that making the jump from just a
chatbot that predicts the next token to
something that could be considered AGI
that's what that would represent if that
happens then the safety debates are
going to catch on fire um the level of
hype from that would be unprecedented um
and we would actually probably uh feel
more Vindicated or validated on the on
the claims that AI progress is actually
accelerating not
decelerating um everything kind of
hinges on uh what GPT 5 looks like
unless uh anthropic comes out with you
know Claude 4 or something before them
who
knows now one thing that I I did promise
to talk about data and so there's a
really great short video um called
efficient compute Frontier by Welch Labs
um you can you can uh look for it I'll
try and make sure I have the link in the
description but basically we have a very
very highly predictable mathematical
model to understand um the relationship
between uh compute and data inputs and
the total loss function of large uh
models and this is all uh deep neural
networks um so because of that we know
we we can basically very accurately
predict where the next balance is going
to be and to me and this is once again
pure speculation on my my part uh the
fact that we have this natural law
emerging this is almost like the the AI
equivalent of Mo's law this trend line
has been so reliable and so durable it
makes me think that the chance of an
intelligence explosion is very low
because every order of magnitude um I
mean this is a logarithmic scale which
means in order to get to the next phase
in in order to get to the next standard
deviation of intelligence is going to
take exponentially more uh input
resources that's exponentially more
compute and exponentially more data now
we've seen this trend in other Sciences
namely high energy physics and Drug
research which is over time it takes
exponentially more resources to make the
same progress so I'm increasingly
skeptical on the possibility of an
intelligence explosion even if
artificial intelligence is used to help
accelerate artificial intelligence
research the same thing has been
happening in drug discovery which is
every Innovation that we make to make
drug Discovery faster and cheaper and
more efficient that's basically just to
keep things at the same Pace um there
were Innovations about um uh substance
assays so basically testing uh many many
substances in parallel uh back in the
day you'd have to do it manually and you
do dozens at a time and then inventions
were made to do it uh thousands at a
time um even though we can do more
parallel testing and Drug Discovery it
is still exponentially more expensive um
to discover new drugs and now we have
Technologies like Alpha fold 2 soon
Alpha fold 3 being added to that mix and
again it will help but that's basically
uh preventing it from from progress from
dwindling to practically zero so again
I'm pretty skeptical on the idea of an
intelligence explosion um the the
longest term uh the longest trend line
that we have is Moore's law which uh you
know Ray Kurtz don't bet against him if
you read some of his older books some of
his predictions are wrong but he always
goes back to that data that that one
really powerful model of moris law which
is why I'm spending so much time talking
about the efficient compute Frontier pay
attention to this model this is probably
going to be a very important uh uh
function of predicting uh intelligence
as it moves
forward so if we do get some kind of
jump if gp5 does represent a larger
safety risk um like I said the safety
the safety Community uh those
conversations will be uh enlivened to
say the least um if so put it this way
clae 3.5 and gp4 are already pretty
smart um when properly prompted they're
smarter than a lot of humans um so if
GPT 5 gets it to like you know the 99th
percentile or the 90th percentile or 90
you know in the upper echelons of human
capability what some people are saying
is that that GPT 5 is going to be PhD
equivalent um again if that's even if
it's remotely true if it's Bachelors or
Masters or PhD equivalent intelligence
we basically then have an infinite
supply of knowledge workers uh which to
say that that would disrupt the current
economic Paradigm as an understatement
however what I will say is that it will
take time to roll out um and also what
happens is the way that I think about it
mathema Al is that greater intelligence
means greater options or greater number
of potentials um and so you could think
of it as like a growing funnel right
like the expansion of the universe when
you have low intelligence the number of
options and number of changes you can
make to the world are relatively limited
think about mice mice have very low
intelligence so basically all they can
do is build a nest beavers are a little
bit smarter so they can build dams
humans are a lot smarter than Beaver so
we can reshape the entire environment as
artificial intelligence comes uh you
know human level and then beyond its
ability to make changes to the world or
to influence long-term outcomes also
goes up I'm not going to say it goes up
exponentially because I also suspect
there might be diminishing returns to
intelligence um basically there's always
going to be incomplete and imperfect
information and in those situations no
amount of intelligence can compensate
for stuff that you just don't know if
you don't have the knowledge if you
don't have the information you can guess
you can create educated guesses and you
can plan for multiple contingencies but
you don't know until you actually go
find
out um so I've also talked about my uh
why I'm less worried um in case you
haven't seen it already I created a pdom
calculator you can get to it at davh
app.io um this is my actual current pdom
um at least as caused by artificial and
super artificial super intelligence so
the it basically you plug in four values
and it and it uh calculates what your
pdom is uh based on Bas theorem
um so will a ASI arrive within 10 years
I had give it a 40% chance um will it be
agentic and by agentic I mean will it
seek its own goals rather than pursue
human defined goals I haven't seen any
evidence that that uh agentic emerges
from these machines in fact there are a
preponderance of papers showing ways
that we can steer these models and I
have not yet found any papers showing
that models are in intrinsically
encourageable um will it be
uncontrollable if it is agentic then I
don't think it'll be controllable but I
don't think it's going to be agentic um
and then will it be hostile to human
existence the more time goes by the less
concerned I am about its hostility
because again we are selecting for
benevolent machines right now um and
there again has been as far as I can
tell zero papers uh illustrating uh
latent malevolence in these machines or
even latent indifference one of the
things that a lot of people in the Doom
Community talk about is well it could
kill us even if it's indifferent to
humans I don't see any evidence of
indifference emerging um I only see
evidence of
benevolence so my personal take is when
I made that prediction about AGI by
September 2024 again it was based on
data there was things that I didn't
consider and I also remember when Sam
Alman said gp4 will be disappointing in
hindsight yes gp4 has been kind of
disappointing it's been useful and of
course it's gotten the media attention
um and also one thing that I was
thinking about before I made this video
was remember uh what Sam Alman said is
that they actually wanted to release a
product to get people used to the idea
of AI before inventing AGI and that's
one of the reasons they were
experimenting with chat GPT they didn't
expect chat GPT to blow up the way that
it did um but the entire reason that
they were exploring that product was
because they wanted to do something that
would say hey guys AI is actually
happening it's time to have these
conversations um so that people would
have time to adapt to this new way of
thinking and this new world that we're
creating and it work worked um again
that could be postao justification it
might have just been an experiment and
and maybe it was not you know Sam
alman's Grand planned um it's it
behooves Sam Alman to say ah yes that
was on purpose um even if it was
completely on accident um however if gp5
lives up to the hype lives up to the
most extreme predictions which again
take it with a grain of salt then maybe
my prediction was right I'm not going to
go one way or another because I can see
entirely too much evidence for and
against and so basically I'm taking a
wait and see time will tell kind of
approach so with all that being said
thanks for watching to the end hope you
liked it cheers
Ver Más Videos Relacionados
Stunning New OpenAI Details Reveal MORE! (Project Strawberry/Q* Star)
The 12 Reasons Why GPT 5 Will Change The World
Sam Altman's Surprising WARNING For GPT-5 - (9 KEY Details)
AI Predictions ― 2024 to 2030 ― Year By Year Breakdown w/ Insider Info
OpenAI'S "SECRET MODEL" Just LEAKED! (GPT-5 Release Date, Agents And More)
OpenAI's New Model Releases LEAKED | Sam Altman talks about AGI, UBI, GPT-5 and what Agents will be
5.0 / 5 (0 votes)