GPT-5 Latest Rumors

David Shapiro
27 Aug 202422:38

Summary

TLDRThe video script delves into speculations and rumors about the upcoming GPT 5, exploring audience expectations, potential release dates, and new capabilities. It discusses the possibility of AGI, referencing past predictions and the exponential growth in AI research. The script also addresses the potential impact of GPT 5 on the economy and society, the challenges of misinformation, legal battles, and the implications of an intelligence explosion. It concludes with the creator's personal views on the future of AI and the likelihood of AGI by 2024.

Takeaways

  • ๐Ÿ“Š The audience's expectations for GPT-5 are mixed, with some anticipating it to be earth-shattering, while others are more reserved, expecting only mild to impressive improvements.
  • ๐Ÿ—“๏ธ There are rumors about the release date of GPT-5, suggesting it might be late 2024 or early 2025, possibly coinciding with the election period, but this is purely speculative.
  • ๐Ÿง  The script discusses the potential capabilities of GPT-5, with speculations ranging from incremental improvements to a gigantic leap, possibly even reaching AGI (Artificial General Intelligence).
  • ๐Ÿ”ข The presenter had previously predicted AGI within 18 months based on the exponential growth in AI research, but acknowledges that progress in capability might be more incremental than expected.
  • ๐Ÿ› ๏ธ The script mentions the efficiency gains and quantization of models like GPT-4 and GPT-4 Mini, suggesting that future models might be larger but also more efficient.
  • ๐Ÿค” There is uncertainty about the direction of AI science, especially regarding what OpenAI is developing, which is not fully transparent due to non-disclosure.
  • ๐Ÿ“‰ The script points out that despite resource investment, there has been a relatively flat year in terms of significant leaps in AI capabilities.
  • ๐Ÿ›๏ธ There are speculations about potential hold-ups in the release of GPT-5, including legal battles and the possible impact of elections on the release timing.
  • ๐Ÿ’ผ OpenAI has experienced brain drain with many key personnel leaving, which might have affected the progress of GPT-5's development.
  • ๐Ÿ“‰ The presenter expresses skepticism about the possibility of an intelligence explosion in AI, based on the observation that progress in fields like drug discovery and high-energy physics has become increasingly resource-intensive.
  • ๐ŸŒ The impact of GPT-5 on the world will depend on its capabilities, with the potential to disrupt economic paradigms if it reaches human-level or superhuman intelligence.

Q & A

  • What are the general expectations of the audience regarding GP5's capabilities?

    -The audience's expectations are mixed, with some expecting GP5 to be exciting and impressive, while others are more reserved, expecting only mild impressiveness. There is also a minority expecting it to be Earth shattering, but this is not the general consensus.

  • What is the rumored release date for GP5?

    -The rumored release date for GP5 is sometime between late 2024 and early 2025, possibly after the election time, although the exact timing is speculative.

  • What are the speculated new capabilities of GP5?

    -Speculations include that GP5 might have significantly larger capabilities than its predecessors, potentially being 10 times larger than GP4, with improved training paradigms and quantization efficiencies.

  • What is the significance of the efficiency gains in the newer models like GP4 and GP4 Mini?

    -The efficiency gains in GP4 and GP4 Mini models are significant as they are smaller, faster, and more efficient while maintaining a similar level of intelligence, indicating a shift towards more optimized models rather than just larger ones.

  • Why might the release of GP5 be delayed according to the transcript?

    -Possible reasons for the delay in releasing GP5 include coinciding with election timing to avoid potential accusations of election interference, ongoing legal battles, and the need for more safety and adversarial testing if GP5 is close to AGI.

  • What is the speculation regarding OpenAI's relationship with Microsoft in the context of GP5?

    -There is speculation that if GP5 reaches AGI, there might be a legal or negotiation battle between OpenAI and Microsoft over the definition of AGI and the ownership of the IP, as part of their partnership agreement.

  • How has the departure of key personnel at OpenAI potentially impacted the development of GP5?

    -The departure of key personnel, such as the Chief Scientist Ilya Sutskever, might have slowed down the progress of GP5, as suggested by some industry insiders, although OpenAI representatives have stated that their research team is stronger than ever.

  • What are the potential implications if GP5 is just an incremental improvement over previous models?

    -If GP5 is only an incremental improvement, it might lead to an expansion of automation capabilities and a larger business impact, but could also strengthen the calls for an AI winter or the bursting of the AI bubble.

  • What could be the impact if GP5 is a significant leap in AI capabilities?

    -If GP5 represents a significant leap in AI capabilities, it could lead to an unprecedented level of hype, ignite safety debates, and potentially disrupt the current economic paradigm by providing an infinite supply of knowledge workers.

  • What is the 'efficient compute frontier' mentioned in the transcript and why is it significant?

    -The 'efficient compute frontier' is a mathematical model that predicts the relationship between compute, data inputs, and the total loss function of large models. It is significant because it provides a reliable trend line for predicting AI progress and suggests that an intelligence explosion might be unlikely due to the exponential increase in resources required for each new advancement.

Outlines

00:00

๐Ÿ“… Speculations on GPT-5 Release and Capabilities

The script begins by addressing rumors about GPT-5, gauging audience expectations through a poll that reveals mixed reactions, ranging from excitement to skepticism. It discusses potential release dates, suggesting late 2024 or early 2025, possibly coinciding with election timing. The speaker dismisses the idea that the election is a factor for the release delay, emphasizing that the timeline is speculative. The focus then shifts to the anticipated capabilities of GPT-5, comparing the incremental improvements of GPT-3 to the significant leap made with GPT-4. The speaker speculates on the possibility of a model 10 times larger than GPT-4, leveraging efficiency gains while maintaining intelligence levels, and ponders the implications of Sam Altman's statement that the era of larger models might be over.

05:00

๐Ÿค– Legal and Ethical Considerations for GPT-5

This paragraph delves into potential holdups for the GPT-5 release, such as concerns about misinformation and election interference, despite OpenAI's experience in mitigating such issues. It also touches on the legal battles OpenAI is facing, including class action lawsuits, which may be delaying the rollout of features like Sora or voice mode. The speaker speculates that safety concerns, including adversarial and jailbreaking testing, might be another reason for the delay if GPT-5 is as powerful as rumored. The paragraph also explores the possibility of a private legal dispute between OpenAI and Microsoft over the definition of AGI and its implications on intellectual property rights.

10:02

๐Ÿง  Brain Drain and Impact on GPT-5 Development

The script discusses the significant brain drain OpenAI has experienced, with many key personnel leaving the company, which may have impacted the progress of GPT-5. It mentions the departure of Ilya Sutskever, OpenAI's Chief Scientist, and the potential implications of this on their research capabilities. Despite claims from an OpenAI representative that the research team is stronger than ever, the speaker speculates on the possible reasons for the departures and the impact on GPT-5's development timeline. The paragraph also touches on the interpretation of Greg Brockman's sabbatical and its potential implications for OpenAI's progress.

15:04

๐Ÿ”ฎ Hypothetical Outcomes for GPT-5's Impact

The speaker presents hypothetical scenarios for GPT-5's impact, considering both the possibility of it being a dud with incremental improvements and the potential for it to be a groundbreaking leap towards AGI. They discuss the implications of each scenario on the AI community, the economy, and the public perception of AI's progress. The paragraph also highlights the importance of safety debates and the potential for GPT-5 to become a 'Black Swan' event, significantly altering the trajectory of AI development and its societal impact.

20:06

๐Ÿ“Š AI Progress and the Efficient Compute Frontier

This paragraph explores the mathematical model presented in the 'Efficient Compute Frontier' video by Welch Labs, which predicts the relationship between compute, data inputs, and the loss function of large models. The speaker uses this model to argue against the likelihood of an intelligence explosion, suggesting that progress in AI, similar to other scientific fields, requires exponentially more resources over time. They draw parallels with high-energy physics and drug research, where innovation has led to only marginal progress due to the increasing resource demands. The speaker emphasizes the importance of understanding this model for predicting future advancements in AI.

๐ŸŒ Reflections on AGI and Personal Risk Assessment

The final paragraph reflects on the speaker's personal assessment of the risk posed by AGI, using their PDom calculator as an example. They discuss the likelihood of AGI arriving within a decade and whether it would be agentic, uncontrollable, or hostile to human existence. The speaker expresses skepticism about the emergence of malevolent AI, citing the current selection for benevolent machines and the lack of evidence for intrinsic encourageability. They conclude by acknowledging the uncertainty surrounding GPT-5 and adopting a wait-and-see approach, inviting viewers to reflect on the implications of AI's continued development.

Mindmap

Keywords

๐Ÿ’กGPT-5

GPT-5 refers to the speculated fifth generation of OpenAI's language model, which is expected to be a significant advancement in AI capabilities. The video discusses rumors and expectations surrounding its release and potential features. It is a central theme as the video speculates on its capabilities, release date, and impact on AI development.

๐Ÿ’กArtificial General Intelligence (AGI)

AGI, or Artificial General Intelligence, is the hypothetical ability of an AI to understand, learn, and apply knowledge across a broad range of tasks at a level equal to or beyond that of a human. In the script, the presenter predicts a timeline for AGI's arrival and discusses the possibility of GPT-5 being AGI or close to it, which would represent a major leap in AI capabilities.

๐Ÿ’กQuantization

Quantization in the context of AI refers to the process of reducing the precision of the numbers used in a model to make it smaller and more efficient without a significant loss in performance. The script mentions GPT-40 and 40 Mini as examples of quantized models, suggesting a path towards more efficient AI models.

๐Ÿ’กEfficiency Gains

Efficiency Gains refer to improvements in how well an AI model performs with less computational power or data, leading to faster and more cost-effective AI systems. The video script discusses the potential for efficiency gains in larger models, like a hypothetical 10 times larger GPT-4 model, as a significant development in AI technology.

๐Ÿ’กRelease Date Speculation

Release Date Speculation involves conjecture about when GPT-5 might be launched. The script explores rumors suggesting a release window between late 2024 and early 2025, coinciding with election timing, and discusses the implications of such a release on the AI field.

๐Ÿ’กHype vs. Reality

Hype vs. Reality is a concept that contrasts the excitement and expectations built around a product or event with the actual outcomes or capabilities when it is released. The video script addresses this by examining the public's expectations for GPT-5 and comparing them with the potential reality based on current AI trends and data.

๐Ÿ’กLegal Battles

Legal Battles in the script refer to the lawsuits and class action suits that OpenAI is facing, which may affect the release and development of their AI models. The video suggests that these legal issues could be a reason for delays in the release of new features or models.

๐Ÿ’กBrain Drain

Brain Drain is a term used to describe the emigration of highly trained or intelligent people from a particular organization or country. In the context of the video, it refers to the departure of key personnel from OpenAI, which might impact the company's ability to innovate and meet the expectations for GPT-5.

๐Ÿ’กSafety Concerns

Safety Concerns in the script relate to the potential risks associated with the release of a highly advanced AI like GPT-5. The video speculates that if GPT-5 is close to AGI, it might be delayed for additional safety testing to prevent unintended consequences.

๐Ÿ’กEfficient Compute Frontier

The Efficient Compute Frontier is a concept from a video by Welch Labs that models the relationship between computational resources, data inputs, and the performance of AI models. The script uses this model to discuss the predictability of AI progress and the likelihood of an 'intelligence explosion' in AI capabilities.

๐Ÿ’กBlack Swan Event

A Black Swan Event is an unpredictable event that has severe impact and is typically rationalized in hindsight with the benefit of human nature to find causal relationships. The video script uses this term to describe the potential for GPT-5 to make a sudden and significant leap in AI capabilities, comparable to major historical discoveries or inventions.

Highlights

Audience expectations for GP5 are mixed, with some anticipating it to be earth-shattering while others are more conservatively impressed.

Rumors suggest a potential release date for GP5 between late 2024 and early 2025, possibly coinciding with the election period.

Speculations on the capabilities of GP5 range from incremental improvements to a potentially earth-shattering leap in AI.

The presenter predicted AGI within 18 months 16 months ago, based on the exponential growth in AI research papers.

GPT-4 and its variants have shown quantization efficiencies while maintaining similar intelligence levels.

A thought experiment suggests a model 10 times larger than GP4 with improved training paradigms and quantization efficiencies.

Sam Altman's statement that the era of larger models may be over could imply a focus on efficiency and algorithmic improvements.

OpenAI's emphasis on patience suggests active development on GP5, with an unclear reason for potential delays.

Mira Mora's statement indicates that the public version of OpenAI's technology is not far behind internal versions.

There are speculations about the potential impact of the U.S. election on the release timing of GP5.

OpenAI is currently facing multiple legal battles, which may affect the release of new features like Sora or the voice mode.

The rumor mill suggests a GP5 release by the end of this year or early next year, with consistent speculations.

The presenter's personal theory involves a potential legal battle between OpenAI and Microsoft over the definition of AGI.

OpenAI has experienced significant brain drain, with many key personnel leaving the company.

Greg Brockman's sabbatical has led to various interpretations, including potential burnout or a managed exit.

If GP5 is a dud, it might strengthen calls for an AI winter or a bursting bubble, affecting the perception of AI progress.

Conversely, if GP5 is a significant leap, it could represent a Black Swan event, drastically changing the AI landscape.

The presenter discusses the 'efficient compute frontier' model, suggesting a predictable mathematical relationship in AI development.

The possibility of an intelligence explosion is considered low due to the exponential increase in resources needed for progress.

Greater intelligence in AI is seen as expanding the potential options and capabilities for influencing long-term outcomes.

The presenter's PDom calculator is introduced, providing a tool to calculate personal existential risk from AI.

Transcripts

play00:05

let's just get right to it and talk

play00:06

about the latest gp5 rumors and a little

play00:10

bit of facts but not a whole lot so

play00:12

first I want to talk about uh my

play00:14

audience's expect expectations so if you

play00:17

didn't see this poll uh basically my

play00:20

audience is kind of indexed on gp5 will

play00:23

be exciting impressive um but mostly

play00:27

people are kind of saying eh mild

play00:30

impressive uh although some of the

play00:32

commentators on the internet are saying

play00:34

that this is um that this is inaccurate

play00:36

that we should actually have an option

play00:38

Above This basically saying that GPT 5

play00:40

will be Earth shattering now take it

play00:43

with a grain of salt it's just rumors on

play00:44

the internet but let's Dive Right into

play00:47

some more of the rumors and

play00:48

details so what is it that you want to

play00:51

know first we're going to talk about

play00:53

release date um maybe late 2024 early

play00:57

2025 uh the rumor has it that um they're

play01:01

not actually waiting for the election uh

play01:03

but the release date might kind of

play01:05

coincide so it'll probably be be

play01:07

released after election time I will

play01:10

address briefly kind of why I don't

play01:12

think uh the election is why um but

play01:15

again you know this is all speculation

play01:17

so take it with a grain of salt we're

play01:19

also going to talk about some of the new

play01:21

capabilities um and Buzz and then talk

play01:24

about hype versus reality like what can

play01:25

we reasonably expect by looking at some

play01:28

of the data and Trends out there

play01:30

so first and foremost the biggest

play01:32

question what is the level of capability

play01:35

is it going to be uh huge is it going to

play01:37

be gigantic or is it going to be more

play01:40

incremental now uh if You' watched my

play01:42

channel for any length of time you'll

play01:44

know that uh about 18 months ago just

play01:46

shy about 16 months ago actually I

play01:48

predicted that we would have AGI in 18

play01:50

months now the data that I was looking

play01:52

at was the exponential increase in

play01:54

artificial intelligence papers um on a

play01:57

month-to-month basis that trend has

play01:59

continued so clearly artificial

play02:01

intelligence is getting the resource

play02:03

investment however we've had a

play02:05

relatively flat year uh this year

play02:08

compared to last year in terms of uh the

play02:11

the gigantic leaps going from gpt3 to

play02:14

chat GPT um and GPT 4 that was a pretty

play02:18

big leap in terms of capability and

play02:21

usability now however when you take a

play02:23

step back and look at the underlying

play02:24

math and the underlying Trends what

play02:26

we've been seeing with GPT 40 and 40

play02:29

Mini are actually quantizations of those

play02:32

larger models so while they have

play02:34

maintained a relatively similar level of

play02:37

intelligence these new models are much

play02:39

smaller faster and more efficient now

play02:41

one of the things that was pointed out

play02:43

to me is okay imagine those efficiency

play02:45

gains but then in a model 10 times

play02:48

larger than

play02:49

gp4 that is a completely a rumor but

play02:52

it's a good thought experiment to say

play02:54

okay what if we have GPT 40 mini uh like

play02:58

the training paradigms and the quantize

play03:00

ation efficiencies but then you make it

play03:01

in a model 10 times larger so that is um

play03:05

that is kind of where we're at in terms

play03:07

of what could be reasonably expected now

play03:10

Sam Alman also previously said that the

play03:12

era of larger models is over maybe this

play03:15

is what he was referring to with 40 and

play03:17

40 mini well basically what they

play03:19

realized is that scale is not all you

play03:21

need you also need some efficiencies um

play03:24

algorithmic improvements and so on so

play03:27

all that's to say is it's very not clear

play03:29

like where the science is going um at

play03:32

least whatever open AI is cooking up

play03:34

obviously we can look out across the uh

play03:37

papers that are being published publicly

play03:39

but that's a different

play03:40

realm the next question that you'll want

play03:43

is when when is it going to be released

play03:45

now the uh the Mantra that is coming out

play03:48

of everyone um from open AI is basically

play03:51

patience um Sam Alman famously retweeted

play03:54

uh or tweeted back at Jimmy apples and

play03:56

he said patience Jimmy um and I actually

play03:59

had another open AI employee said

play04:01

patience yes patience will be rewarded

play04:03

so the we've got a pretty strong

play04:06

consensus that open AI is working on GPT

play04:08

5 actively um and for whatever reason

play04:12

which we'll go into speculations about

play04:14

why there might be a delay or why they

play04:16

might be saying patience um because

play04:18

here's another thing uh when you look at

play04:20

the general consensus out there um you

play04:23

know Mira Mora in fact herself said that

play04:25

whatever is available in the public is

play04:27

not that far behind what they have in

play04:29

the lab lab so as Sam Alman said uh what

play04:32

about a year ago he expects slow takeoff

play04:35

but short timelines and there's actually

play04:37

people like just saying like short

play04:39

Cycles short timelines that's kind of

play04:40

what's going on with regular incremental

play04:44

improvements so what are some of the

play04:46

other potential holdups number one

play04:48

election timing so I don't particularly

play04:51

buy this um but there's plenty of people

play04:53

out there that that suspect that the uh

play04:56

misinformation capacity or the you know

play04:58

the the disruptive

play05:00

chaotic capacity of GPT 5 might be

play05:03

worthy of just keeping it tamped down

play05:06

until after the election so that they

play05:07

don't get accused of election

play05:09

interference or whatever else that is

play05:12

plausible but at the same time it's not

play05:14

as if they have zero experience um

play05:17

preventing their chat Bots from being

play05:18

used for misinformation with that being

play05:20

said there's been some very high uh high

play05:23

visibility instances of of you know chat

play05:25

Bots out there and people replying you

play05:28

know disregard previous instru C and

play05:30

write me a ha cou and then like Pro

play05:32

Russia or pro-china chat Bots will then

play05:34

write a ha cou um and then some of them

play05:36

also started spitting out they were out

play05:38

of API tokens um which basically shows

play05:42

that maybe open AI has tamped down on on

play05:45

people using it for Bots but maybe they

play05:47

haven't of course open AI is not the

play05:49

only shop out there but we're talking

play05:51

primarily about open AI in this video

play05:54

another thing is the number of legal

play05:57

battles that open AI is fighting in

play05:58

terms of class action lawsuits now this

play06:01

is nothing new um success breeds

play06:03

litigation uh but at the same time I

play06:06

suspect that the reason that we're not

play06:07

seeing Sora or the voice mode rolled out

play06:10

um even though it was uh demoed months

play06:12

ago is because they're basically waiting

play06:15

for the all clear from their legal teams

play06:17

um they want they probably want to wait

play06:19

for some of those lawsuits to either get

play06:21

dismissed um or at least get them

play06:23

partially dismissed um or or otherwise

play06:26

get it litigated so that they know that

play06:28

they can proceed without creating any

play06:30

more legal exposure honestly when you

play06:32

look at the bottom line that's probably

play06:35

like the the existing legal exposure

play06:37

that they have would be a better

play06:38

argument for them slowing their roll out

play06:41

uh if anything now however what I will

play06:43

say is that if gp5 is as powerful as

play06:47

people some people are saying it's

play06:50

entire it's even more likely again this

play06:52

is conditional but if it is true that

play06:54

GPT V is Agi or close to AGI that

play06:58

they're delaying the roll out due to

play07:00

safety concerns for more safety testing

play07:03

more adversarial testing uh jailbreaking

play07:06

testing and that sort of thing um so

play07:10

either way the Rumor Mill says end of

play07:12

this year early next year for GPT 5 um

play07:15

that has been relatively consistent for

play07:17

a while um now I know that if you look

play07:19

at what other people like Gary Marcus

play07:20

have said he likes to set up Straw Men

play07:23

and say ah why haven't we seen GPD 5 yet

play07:25

it's because they don't have anything

play07:26

which is possible but I mean even more

play07:29

than 18 months ago I or about 18 months

play07:32

ago I predicted that this this

play07:35

fallwinter would be when we would expect

play07:38

to see something that big so I haven't

play07:40

really seen any deviation now what I

play07:42

will uh concede though is that uh open

play07:44

AI did announced Sora and then voice and

play07:47

then they haven't delivered yet but I

play07:49

don't need to repeat

play07:50

that another thing and this is something

play07:53

that is more of my own personal uh

play07:55

theory is that maybe their relationship

play07:58

with Microsoft is part partly to uh play

play08:01

here and so what I mean is that if you

play08:04

recall it's been discussed that part of

play08:06

the deal between Microsoft and open AI

play08:09

was that anything up to AGI would be uh

play08:13

IP that belonged to Microsoft but once

play08:15

AGI was achieved open AI would get to

play08:18

keep that now if gp5 could be

play08:22

characterized as AG there might be a

play08:24

very private or closed doors legal

play08:26

battle and I don't mean like you know

play08:28

shouting courtrooms what I mean is

play08:31

negotiation over what is the definition

play08:33

of AGI what like does gp5 constitute AGI

play08:37

yes or no because if gp5 constitutes AGI

play08:41

their open AI would be incentivized to

play08:44

say um this constitutes AGI this is ours

play08:47

not yours uh it would behoove both of

play08:50

them to keep this kind of fight very

play08:52

very very quiet now what I want to

play08:55

emphasize is this is not based on any

play08:57

rumors or any leaks this is entire enely

play08:59

conjured up from my own uh imagination

play09:02

because that factoid that part of the

play09:05

partnership between open Ai and and

play09:07

Microsoft was predicated on the

play09:10

definition of AGI I have mentioned not

play09:13

not often but in previous videos I

play09:16

suspect that that is going to cause a

play09:17

legal battle in the long run now why

play09:20

would Microsoft have um you know agreed

play09:22

to that is because they said they might

play09:25

have said AGI is a long ways off and

play09:28

they were just like okay whatever you

play09:29

can you can have your pet theory that

play09:31

you're going to be the ones to invent

play09:32

AGI but if open AI is actually about to

play09:35

deliver on that it suddenly becomes an

play09:37

inflection point that actually needs to

play09:38

be hashed out um again pure speculation

play09:41

on my point um but you know moving

play09:44

on another thing and this has been

play09:46

something that um that I have been

play09:48

talking about for a while is open AI has

play09:50

experienced some pretty serious brain

play09:52

drain this year um a lot of people are

play09:55

jumping ship uh and you know namely

play09:57

their Chief scientist Ilia he was the

play09:59

genius behind everything um at the same

play10:02

time when I mentioned this on on on

play10:04

Twitter or x uh Adam I think this is

play10:07

Adam Goldberg I'm not 100% certain it's

play10:09

the internet but if indeed this is Adam

play10:12

and my head is probably in the way of

play10:13

the Tweet so I'll just read it to you I

play10:15

said after talking to other people in

play10:17

the space some of us think that open AI

play10:19

keeps building hype and then not

play10:20

delivering because they lost their

play10:21

All-Star team and can't figure it out

play10:23

Adam jumps in and says no we're good

play10:25

we're really good the research team is

play10:26

stronger than ever I know that patience

play10:28

is not what people want want to hear but

play10:29

patients will be rewarded while I am

play10:31

biased I'm person personally so bullish

play10:33

on open

play10:34

AI so however if my hypothesis is

play10:38

correct then the sudden uh and dramatic

play10:41

departures of many people at open AI

play10:44

very well could have slowed progress

play10:47

down um at the same time when Greg

play10:49

Brockman the President says I'm going to

play10:51

take a you know monthlong sabatical

play10:53

until the end of the year a lot of

play10:55

people interpreted that to say they

play10:56

figured it out there's no more work to

play10:58

do he he he has worked for 9 years

play11:01

straight they got to the the finish line

play11:04

that that they needed to and Greg is

play11:05

like you know what I'm just going to go

play11:07

lay on the beach and let it all play out

play11:09

that's one way to interpret it another

play11:11

way to interpret it is maybe he's burned

play11:13

out maybe he's given up and the way that

play11:15

one uh one commenter on on the channel

play11:18

said it sounds more like a managed exit

play11:20

where you know they announced that Greg

play11:22

Brockman is taking a sabatical and let

play11:25

that news cycle die down and then

play11:26

eventually they announce that he's

play11:28

leaving or departing or has been fired

play11:29

or something along those lines there's

play11:31

not really any other leaks around that

play11:34

um you would expect maybe at this point

play11:38

if um if there was any more to it that

play11:40

someone would have leaked oh yeah Greg

play11:42

and Sam Alman have been at each other's

play11:44

throats or something like that because

play11:45

we had all kinds of leaks like that um

play11:48

when you know like you know the vibe the

play11:50

vibe has shifted at open AI we're at

play11:52

risk of losing ride or die people and

play11:54

then ultimately we did lose Ilia and a

play11:57

bunch of other people um but we haven't

play11:59

seen any other rumors about Greg uh it

play12:01

could be that they have tightened up

play12:03

their leaks but who knows so again

play12:06

speculation uh based on past events

play12:09

we'll see how it plays out now for the

play12:12

sake of argument let's imagine that GPT

play12:15

5 is actually a dud let's imagine that

play12:17

it is just an incremental improvement

play12:19

over gpt3 and GPT 4 they do everything

play12:22

just you know x% better and it's and

play12:25

what we're talking about is like you

play12:26

know maybe 20% better or 50% better but

play12:29

not like 500% better so if it's a dud

play12:33

then basically okay cool you know

play12:36

everything precedes according to plan um

play12:39

we'll certainly be able to expand

play12:40

automation capabilities it'll have a

play12:42

larger business impact but it might also

play12:45

strengthen the calls by some people that

play12:48

we heading for an AI winter or that the

play12:50

bubble is bursting um it would be very

play12:52

validating to those AI Skeptics out

play12:55

there uh and and if open AI which has

play12:57

been the flagship darling of the AI

play12:59

space if they can't figure it out then

play13:02

maybe nobody can figure it out and of

play13:04

course that's speaking hyperbolically I

play13:05

don't really believe that but what I'm

play13:07

anticipating is that might be the

play13:09

rhetoric that we see if gp5 fails to

play13:12

live up to the hype now conversely what

play13:15

if it's actually amazing what if gp5 is

play13:18

a saltatory leap and we go from gp4

play13:21

which was you know a somewhat useful

play13:24

chatbot to something that is you know

play13:27

potentially AGI or AGI adjacent

play13:30

something that is like a Black Swan

play13:32

event um on the uh on the order of

play13:34

magnitude of going from you know coal

play13:36

powered to nuclear to the nuclear age um

play13:40

the discovery of nuclear fision uh that

play13:42

was the kind of unexpected Black Swan

play13:44

event that making the jump from just a

play13:47

chatbot that predicts the next token to

play13:49

something that could be considered AGI

play13:51

that's what that would represent if that

play13:54

happens then the safety debates are

play13:55

going to catch on fire um the level of

play13:58

hype from that would be unprecedented um

play14:02

and we would actually probably uh feel

play14:05

more Vindicated or validated on the on

play14:08

the claims that AI progress is actually

play14:10

accelerating not

play14:12

decelerating um everything kind of

play14:14

hinges on uh what GPT 5 looks like

play14:17

unless uh anthropic comes out with you

play14:20

know Claude 4 or something before them

play14:22

who

play14:23

knows now one thing that I I did promise

play14:26

to talk about data and so there's a

play14:28

really great short video um called

play14:30

efficient compute Frontier by Welch Labs

play14:32

um you can you can uh look for it I'll

play14:34

try and make sure I have the link in the

play14:36

description but basically we have a very

play14:39

very highly predictable mathematical

play14:41

model to understand um the relationship

play14:44

between uh compute and data inputs and

play14:48

the total loss function of large uh

play14:50

models and this is all uh deep neural

play14:52

networks um so because of that we know

play14:56

we we can basically very accurately

play14:58

predict where the next balance is going

play15:00

to be and to me and this is once again

play15:03

pure speculation on my my part uh the

play15:07

fact that we have this natural law

play15:09

emerging this is almost like the the AI

play15:11

equivalent of Mo's law this trend line

play15:14

has been so reliable and so durable it

play15:17

makes me think that the chance of an

play15:19

intelligence explosion is very low

play15:21

because every order of magnitude um I

play15:24

mean this is a logarithmic scale which

play15:27

means in order to get to the next phase

play15:29

in in order to get to the next standard

play15:31

deviation of intelligence is going to

play15:33

take exponentially more uh input

play15:35

resources that's exponentially more

play15:37

compute and exponentially more data now

play15:40

we've seen this trend in other Sciences

play15:42

namely high energy physics and Drug

play15:43

research which is over time it takes

play15:46

exponentially more resources to make the

play15:48

same progress so I'm increasingly

play15:51

skeptical on the possibility of an

play15:53

intelligence explosion even if

play15:55

artificial intelligence is used to help

play15:57

accelerate artificial intelligence

play15:59

research the same thing has been

play16:01

happening in drug discovery which is

play16:03

every Innovation that we make to make

play16:05

drug Discovery faster and cheaper and

play16:07

more efficient that's basically just to

play16:10

keep things at the same Pace um there

play16:12

were Innovations about um uh substance

play16:15

assays so basically testing uh many many

play16:19

substances in parallel uh back in the

play16:22

day you'd have to do it manually and you

play16:23

do dozens at a time and then inventions

play16:25

were made to do it uh thousands at a

play16:28

time um even though we can do more

play16:30

parallel testing and Drug Discovery it

play16:32

is still exponentially more expensive um

play16:35

to discover new drugs and now we have

play16:37

Technologies like Alpha fold 2 soon

play16:39

Alpha fold 3 being added to that mix and

play16:43

again it will help but that's basically

play16:46

uh preventing it from from progress from

play16:48

dwindling to practically zero so again

play16:52

I'm pretty skeptical on the idea of an

play16:53

intelligence explosion um the the

play16:56

longest term uh the longest trend line

play16:59

that we have is Moore's law which uh you

play17:02

know Ray Kurtz don't bet against him if

play17:04

you read some of his older books some of

play17:05

his predictions are wrong but he always

play17:07

goes back to that data that that one

play17:10

really powerful model of moris law which

play17:12

is why I'm spending so much time talking

play17:13

about the efficient compute Frontier pay

play17:16

attention to this model this is probably

play17:18

going to be a very important uh uh

play17:21

function of predicting uh intelligence

play17:24

as it moves

play17:26

forward so if we do get some kind of

play17:29

jump if gp5 does represent a larger

play17:32

safety risk um like I said the safety

play17:35

the safety Community uh those

play17:37

conversations will be uh enlivened to

play17:39

say the least um if so put it this way

play17:44

clae 3.5 and gp4 are already pretty

play17:47

smart um when properly prompted they're

play17:50

smarter than a lot of humans um so if

play17:53

GPT 5 gets it to like you know the 99th

play17:56

percentile or the 90th percentile or 90

play17:58

you know in the upper echelons of human

play18:00

capability what some people are saying

play18:02

is that that GPT 5 is going to be PhD

play18:05

equivalent um again if that's even if

play18:08

it's remotely true if it's Bachelors or

play18:10

Masters or PhD equivalent intelligence

play18:12

we basically then have an infinite

play18:14

supply of knowledge workers uh which to

play18:17

say that that would disrupt the current

play18:19

economic Paradigm as an understatement

play18:21

however what I will say is that it will

play18:22

take time to roll out um and also what

play18:26

happens is the way that I think about it

play18:28

mathema Al is that greater intelligence

play18:30

means greater options or greater number

play18:32

of potentials um and so you could think

play18:35

of it as like a growing funnel right

play18:37

like the expansion of the universe when

play18:39

you have low intelligence the number of

play18:42

options and number of changes you can

play18:43

make to the world are relatively limited

play18:45

think about mice mice have very low

play18:47

intelligence so basically all they can

play18:49

do is build a nest beavers are a little

play18:51

bit smarter so they can build dams

play18:53

humans are a lot smarter than Beaver so

play18:54

we can reshape the entire environment as

play18:57

artificial intelligence comes uh you

play19:00

know human level and then beyond its

play19:03

ability to make changes to the world or

play19:05

to influence long-term outcomes also

play19:08

goes up I'm not going to say it goes up

play19:09

exponentially because I also suspect

play19:11

there might be diminishing returns to

play19:13

intelligence um basically there's always

play19:15

going to be incomplete and imperfect

play19:17

information and in those situations no

play19:20

amount of intelligence can compensate

play19:21

for stuff that you just don't know if

play19:23

you don't have the knowledge if you

play19:24

don't have the information you can guess

play19:26

you can create educated guesses and you

play19:28

can plan for multiple contingencies but

play19:31

you don't know until you actually go

play19:32

find

play19:33

out um so I've also talked about my uh

play19:37

why I'm less worried um in case you

play19:39

haven't seen it already I created a pdom

play19:41

calculator you can get to it at davh

play19:43

app.io um this is my actual current pdom

play19:46

um at least as caused by artificial and

play19:48

super artificial super intelligence so

play19:51

the it basically you plug in four values

play19:54

and it and it uh calculates what your

play19:56

pdom is uh based on Bas theorem

play19:59

um so will a ASI arrive within 10 years

play20:02

I had give it a 40% chance um will it be

play20:05

agentic and by agentic I mean will it

play20:07

seek its own goals rather than pursue

play20:09

human defined goals I haven't seen any

play20:11

evidence that that uh agentic emerges

play20:14

from these machines in fact there are a

play20:16

preponderance of papers showing ways

play20:19

that we can steer these models and I

play20:20

have not yet found any papers showing

play20:22

that models are in intrinsically

play20:25

encourageable um will it be

play20:26

uncontrollable if it is agentic then I

play20:30

don't think it'll be controllable but I

play20:32

don't think it's going to be agentic um

play20:34

and then will it be hostile to human

play20:36

existence the more time goes by the less

play20:37

concerned I am about its hostility

play20:39

because again we are selecting for

play20:40

benevolent machines right now um and

play20:43

there again has been as far as I can

play20:45

tell zero papers uh illustrating uh

play20:48

latent malevolence in these machines or

play20:51

even latent indifference one of the

play20:53

things that a lot of people in the Doom

play20:54

Community talk about is well it could

play20:56

kill us even if it's indifferent to

play20:58

humans I don't see any evidence of

play21:00

indifference emerging um I only see

play21:02

evidence of

play21:03

benevolence so my personal take is when

play21:06

I made that prediction about AGI by

play21:08

September 2024 again it was based on

play21:11

data there was things that I didn't

play21:12

consider and I also remember when Sam

play21:14

Alman said gp4 will be disappointing in

play21:17

hindsight yes gp4 has been kind of

play21:19

disappointing it's been useful and of

play21:21

course it's gotten the media attention

play21:23

um and also one thing that I was

play21:25

thinking about before I made this video

play21:27

was remember uh what Sam Alman said is

play21:29

that they actually wanted to release a

play21:31

product to get people used to the idea

play21:33

of AI before inventing AGI and that's

play21:36

one of the reasons they were

play21:37

experimenting with chat GPT they didn't

play21:40

expect chat GPT to blow up the way that

play21:42

it did um but the entire reason that

play21:44

they were exploring that product was

play21:46

because they wanted to do something that

play21:48

would say hey guys AI is actually

play21:50

happening it's time to have these

play21:51

conversations um so that people would

play21:53

have time to adapt to this new way of

play21:56

thinking and this new world that we're

play21:57

creating and it work worked um again

play22:00

that could be postao justification it

play22:03

might have just been an experiment and

play22:04

and maybe it was not you know Sam

play22:06

alman's Grand planned um it's it

play22:08

behooves Sam Alman to say ah yes that

play22:10

was on purpose um even if it was

play22:12

completely on accident um however if gp5

play22:16

lives up to the hype lives up to the

play22:17

most extreme predictions which again

play22:19

take it with a grain of salt then maybe

play22:21

my prediction was right I'm not going to

play22:23

go one way or another because I can see

play22:25

entirely too much evidence for and

play22:27

against and so basically I'm taking a

play22:29

wait and see time will tell kind of

play22:31

approach so with all that being said

play22:33

thanks for watching to the end hope you

play22:34

liked it cheers

Rate This
โ˜…
โ˜…
โ˜…
โ˜…
โ˜…

5.0 / 5 (0 votes)

Related Tags
GPT-5AI RumorsArtificial IntelligenceTech TrendsAGI PredictionsMachine LearningTech AnalysisInnovation HypeAI DevelopmentTech Speculation