New STUNNING Research Reveals AI In 2030...
Summary
TL;DR: Epoch AI's report predicts significant advancements in AI, suggesting models like GPT-5 could generate over $2 billion in revenue within a year of release. The report highlights the potential for AI to automate a substantial portion of the $60 trillion global economy, with companies investing trillions in AI development. It also discusses the possibility of training runs increasing by 5,000 times by 2030, the exploration of synthetic data to overcome data scarcity, and the race for gigawatt-scale data centers. The video emphasizes the conservative yet impressive estimates for AI's future, indicating an 'incredible time' ahead in the field.
Takeaways
- 🔮 Epoch AI's report suggests that AI advancements could lead to models like GPT-5 generating over $2 billion in revenue within their first year of release.
- 🌟 The potential economic impact of AI is vast, with the possibility of automating a small portion of the $60 trillion annual economic output.
- 🚀 AI models are expected to develop agentic capabilities, allowing them to operate more independently and integrate seamlessly into existing workflows.
- 📈 The report forecasts a significant scale-up in AI models, with future models by 2030 potentially being 20,000 times more capable than current ones.
- 💹 There's a growing belief that investing trillions of dollars in AI development could be economically justified due to the enormous potential payoffs.
- 💼 Wall Street's skepticism about AI's profitability is contrasted with the long-term vision of AI's capability to revolutionize industries and generate substantial revenue.
- 💡 The report highlights the importance of AI's ability to automate tasks, with predictions that 100% of tasks could be automated by 2043.
- 🌐 Companies like Meta and Amazon are investing heavily in power and data centers, indicating a commitment to the future of AI and the infrastructure needed to support it.
- 🔋 The future of AI training involves larger models, with projections of training runs by 2030 being 5,000 times larger than those of Llama 3.1, although power demand may not scale as much.
- 📊 The report discusses the potential for synthetic data to address data scarcity issues, with reinforcement learning being a method to improve the quality of AI-generated data.
Q & A
What is the main focus of Epoch AI's research initiative?
-Epoch AI's research initiative is focused on investigating trends in machine learning and forecasting the development of artificial intelligence.
What does the report by Epoch AI predict about the future of AI?
-The report predicts significant advancements in AI functionality, with newer models like GPT-5 potentially generating over $2 billion in revenue within their first year of release.
How does the report suggest AI models will integrate into existing workflows?
-The report suggests that AI models will seamlessly integrate into existing workflows, manipulate browser windows or virtual machines, and operate independently in the background.
What is the potential economic impact of AI models according to the report?
-The report suggests that if an AI model can automate a small portion of the $60 trillion economic output, generating $20 billion in economic value is plausible.
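The claim in this answer is easy to sanity-check directly; a minimal sketch in Python, using only the $60 trillion and $20 billion figures quoted from the report:

```python
# Figures quoted from the report: ~$60T annual economic output,
# $20B of plausible AI-generated economic value.
global_output = 60e12
ai_value = 20e9

# The fraction of annual output AI would need to automate
# to generate that much value -- roughly 0.03%.
share = ai_value / global_output
print(f"AI would need to automate {share:.4%} of annual output")
```

The point the report is making is precisely that this fraction is tiny, which is why $20 billion in economic value is described as plausible.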
What does the report say about the scale-up from GPT 4 to GPT 6 models?
-The report indicates that GPT-4 to GPT-6 could represent a 10,000 times scale-up, and future models by 2030 could potentially be a 20,000 times scale-up.
How does the report address the concerns of Wall Street regarding AI investments?
-The report justifies the massive investments in AI by highlighting the potential for unprecedented economic growth and the possibility of capturing a fraction of the $60 trillion global labor compensation.
What is the significance of the 10x output prediction mentioned in the report?
-The 10x output prediction signifies that if AI automation replaces almost all human labor, economic growth could accelerate by tenfold or more over a few decades, increasing economic output significantly.
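To make "tenfold or more over just a few decades" concrete, here is a hedged back-of-the-envelope calculation. The 30-year horizon is my assumption, since the report only says "a few decades":

```python
growth_factor = 10   # "tenfold or more" total increase in output
years = 30           # assumed horizon for "a few decades"

# Implied compound annual growth rate: (1 + r)^years = growth_factor
implied_annual_rate = growth_factor ** (1 / years) - 1
print(f"{implied_annual_rate:.1%} per year")  # about 8% per year
```

For comparison, recent global economic growth has run at roughly 3% per year, so the scenario implies sustained growth well above anything in the historical record.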
How does the report envision the future of training runs for AI models?
-The report envisions training runs for AI models to be longer, potentially lasting up to six months, and to be 5,000 times larger than those of Llama 3.1 by 2030.
What are the potential constraints on AI development according to the report?
-The report identifies power and chip availability as the most binding constraints on AI development, with data scarcity being the most uncertain bottleneck.
How does the report address the issue of synthetic data and model collapse?
-The report discusses the use of reinforcement to improve the quality of AI-generated data, which can prevent model collapse and even lead to perfect performance in some cases.
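The "reinforcement" idea described in this answer can be sketched as a simple select-the-best loop. This is an illustrative toy, not the paper's actual method: `generate` and the quality scores are hypothetical stand-ins for a generator model and a verifier (human or AI):

```python
import random

random.seed(0)

def generate(n):
    # Hypothetical stand-in for a model producing synthetic examples;
    # here each "example" is just a quality score in [0, 1].
    return [random.random() for _ in range(n)]

def reinforce(examples, keep_fraction=0.1):
    # The selection step: a verifier keeps only the best generated
    # examples for training the next model.
    ranked = sorted(examples, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_fraction))]

raw = generate(1000)
selected = reinforce(raw)

# The selected pool is markedly better than the raw synthetic pool,
# which is the mechanism claimed to prevent model collapse.
print(sum(raw) / len(raw), sum(selected) / len(selected))
```

Without the selection step, each generation trains on unfiltered output of the last, and quality drifts downward; with it, only above-average data feeds the next model.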
Outlines
🤖 Epoch AI's Predictions on AI's Future Economic Impact
Epoch AI, a research initiative, has released a report detailing the future of AI and its potential economic impact. The report suggests that AI advancements, particularly in models beyond GPT-4, could lead to significant economic returns. It highlights the possibility of newer models like GPT-5 generating over $2 billion in revenue within their first year of release. The report also discusses the potential for AI to automate a portion of the $60 trillion annual economic output, emphasizing the integration of AI into existing workflows and the development of agentic capabilities, allowing AI systems to operate more independently.
💹 AI's Economic Justification and Wall Street's Skepticism
The script delves into the economic justification for investing in AI, suggesting that even a fraction of the $60 trillion global labor compensation could be captured by AI, making substantial investments economically viable. It contrasts this with Wall Street's skepticism, highlighting an article questioning AI's profitability. The speaker argues that while companies are investing heavily in AI, the potential for future returns is enormous, and Wall Street may not fully appreciate the long-term value of these investments. The script also mentions predictions of AI automating all tasks by 2043, indicating a significant shift in economic dynamics.
🚀 AI's Computational Growth and Energy Demand
This section discusses the anticipated growth in AI model training, with projections that by 2030, training runs could be 5,000 times larger than those of Llama 3.1. It addresses concerns about power demand, suggesting that while training runs may become longer to spread out energy needs, advancements in algorithms and training techniques could mitigate the need for excessively long training periods. Companies like Meta and Amazon are investing in large-scale energy sources to support future AI development, indicating a commitment to overcoming potential power constraints.
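The 5,000x figure is roughly what you get by compounding the recent ~4x-per-year growth in frontier training compute (the rate quoted later in the transcript) over the six years from Llama 3.1 to 2030; a quick sanity check:

```python
growth_per_year = 4   # approximate recent growth rate in frontier training compute
years = 6             # Llama 3.1 (2024) to 2030

# Compounding: 4^6 = 4096, the same order as the report's 5,000x projection.
scale_up = growth_per_year ** years
print(scale_up)
```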
🌐 The Future of AI Training and Data Constraints
The script explores the future of AI training, considering the constraints of data availability and the potential for synthetic data to extend training capabilities. It discusses the concept of a 'data wall' and how multimodal data and synthetic data generation could help overcome this limitation. The speaker references a study that suggests reinforcement methods can improve the quality of AI-generated data, preventing model collapse. The section also includes a discussion of the largest feasible training runs given current constraints, with projections for significant increases in compute power by 2030.
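The compute arithmetic in this section follows directly from Chinchilla-style proportional scaling: compute grows roughly with the product of data and parameters. A minimal sketch using the figures quoted from the report:

```python
data_scale = 30    # 30x more training data (the full indexed web, per the report)
param_scale = 30   # parameters scaled proportionately, per the Chinchilla scaling laws

# Compute scales with (data x parameters): 30 x 30 = 900.
compute_scale = data_scale * param_scale
print(compute_scale)  # 900x as much compute for a Chinchilla-optimal run
```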
📈 AI Scaling and Its Impact on Future Economic Output
The final paragraph summarizes the report's findings on AI scaling, emphasizing that by 2030, we could see AI models that are 10,000 times larger in scale than current models. It suggests that this scaling, combined with increased investment and competition among tech giants, will lead to an explosion in AI capabilities. The speaker concludes by highlighting the conservative estimates of researchers and the potential for AI to revolutionize various industries, leaving viewers with a sense of the immense possibilities that AI advancements could bring to the economy and society.
Keywords
💡AI Hype
💡GPT Models
💡Economic Output
💡Agentic Capability
💡Algorithmic Improvements
💡Economic Value
💡Investment in AI
💡Training Runs
💡Data Centers
💡Compute
Highlights
Epoch AI's report predicts significant advancements in AI, suggesting that AI hype is far from over.
The report forecasts that AI models like GPT-5 could generate over $2 billion in revenue within their first year of release.
AI models are expected to integrate seamlessly into existing workflows, manipulate browser windows, and operate independently.
The economic output potential of AI is highlighted, with the possibility of automating a small portion of the $60 trillion annual economic output.
The report discusses the potential for AI models to operate with agentic capability, reducing human reliance.
GPT-4 to GPT-6 models are predicted to represent a 10,000 times scale-up in AI capabilities by 2030.
Investment in AI is compared to the global labor compensation of $60 trillion, indicating the vast economic potential of AI automation.
The report addresses the skepticism on Wall Street about AI's profitability, suggesting a misunderstanding of AI's long-term potential.
Epoch AI predicts that AI could automate 100% of tasks by 2043, indicating a massive shift in the economic landscape.
The report suggests that AI development could lead to a tenfold increase in economic growth over a few decades.
Investors may redirect significant capital from traditional sectors into AI development due to the potential for unprecedented economic growth.
The report provides a conservative estimate that by 2030, training runs for AI models could be 5,000 times larger than those of Llama 3.1.
Companies like Meta and Amazon are investing in large-scale energy sources to support future AI training needs.
The report discusses the potential for gigawatt-scale data centers to become feasible by 2030, supporting large AI training runs.
The report addresses the 'data wall' concern, suggesting that synthetic data generation could mitigate data scarcity constraints.
A method called reinforcement is presented as a way to improve the quality of AI-generated data, preventing model collapse.
The report concludes that by the end of the decade, we could train AI models that are 10,000 times larger, indicating a significant leap in AI capabilities.
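The "over 35 times more power" highlight checks out arithmetically; a quick sketch with the two numbers the report gives (960 MW of contracted capacity versus 27 MW for today's largest training runs):

```python
contracted_mw = 960   # Amazon's contracted capacity in Pennsylvania (from the report)
today_run_mw = 27     # power quoted for today's largest training runs

# 960 / 27 is about 35.6, hence "over 35 times" today's training-run power.
ratio = contracted_mw / today_run_mw
print(f"{ratio:.1f}x")
```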
Transcripts
Epoch AI is a research initiative
focused on investigating Trends in
machine learning and forecasting the
development of artificial intelligence
now they've recently released a report
on the future of AI and some of their
predictions are probably the most
accurate and it's rather surprising
considering what most people are saying
so essentially in this video I'll dive
into their findings and show you why the
AI hype is truly far from over and I'll
show you the actual conservative
estimates that show we're in for a pretty wild ride over the next 6 years, up until at least 2030. So one of the craziest
things that I saw from the report and
I've just you know picked up a few
things because the entire report was I
think around 60 or so Pages I'm not
exactly sure how many pages but it was
rather extensive so I decided to just
show you guys a few Snippets from that
report now one of the things that was
there was that they talk about how the
potential for sufficiently large
economic returns that could emerge from
scaling beyond GPT-4 to a GPT-6
equivalent model coupled with
substantial algorithmic improvements and
post-training improvements it says okay
and this is the bit that I've
highlighted that this evidence might
manifest as newer models like GPT 5
generating over $2 billion in Revenue
within their first year of release now
that is absolutely incredible but I
think later on in the article they talk
about how the entire economic output is
around 60 trillion per year and they're
basically stating that look if an AI
model is able to automate a small
portion of that it being able to get $20
billion of economic value is not that
hard when you actually think about the
amount of value the economy produces so
what you can see here is that they're
talking about significant advancement in
AI functionality allowing for models to
seamlessly integrate into existing
workflows manipulate browser windows or
virtual machines and operate
independently in the background so
basically what they're talking about
here is that you know allowing models to
seamlessly integrate into existing
workflows manipulate browser windows or
virtual machines and operate
independently in the background what
they're referring to here is agentic
capability so operating independently is
where we have these systems that you
know no longer require humans as much
now currently if we want AI systems to
perform well at nearly any task what we
have to do is we have to prompt that AI
model so we open up a chat we say hey
can you do this can you do that and then
of course we have to you know refine The
Prompt and get the AI system to do a lot
of different things now in the future
these things are going to be operating
independently in the background which
means that there's going to be quite a
lot more scale (didn't mean to draw a box there), but this is going to be one of the
biggest things now the thing about this
is that if you saw another video that I
spoke about you know the trends in
machine learning and how we're going to
evolve for future models, a GPT-4 to GPT-6 level equivalent model coupled with, as they say, substantial algorithmic improvements and post-training, that is going to be absolutely
incredible because when I looked at
another part they basically talked about
GPT-4 to GPT-6 could be a 10,000 times scale-up, or future models by 2030 could be entirely a 20,000 times scale-up, so
it's going to be super intriguing to see
how models scale up from GPT-4 to GPT-6
because there's going to be likely two
giant training runs there's going to be
substantial algorithmic improvements and
considering the fact that GPT 5 is
likely to be released later this year or
early next year it's going to be
interesting to see exactly what those
improvements are with every iterative
cycle so this being $20 billion of
economic revenue or economic value is
going to be absolutely incredible but
the point is is that it should show you
what is going to come in the future and
if GPT 5 could generate $20 billion in
Revenue within its first year of release
I'm wondering what future models are
going to be able to do at that time now
you can see right here like I said
before this is where we talk about the
$60 trillion economy and it says here
that the potential payoff for AI that
can automate a substantial portion of
economic task is enormous it's plausible
that an economy would invest trillions
of dollars basically stating that of
course you know it's plausible that the
economy would invest trillions of
dollars building up that stock of
compute-related capital, including data centers, semiconductor fabrication plants
and lithography machines and it says of
course here the part I highlighted to
understand the scale of this potential
investment consider that Global labor
compensation is approximately $60
trillion per year basically stating that
this is how much we pay people to do
tasks that move the economy and even
without factoring accelerated economic
growth from AI automation if it becomes
feasible to develop AI capable of
effectively substituting for human labor
investing trillions of dollars to
capture even a fraction of the $60
trillion flow would be economically
Justified basically stating that look
like I said before $60 trillion okay is
a lot of money and if we get even a
slice of that like even if you get $1
trillion like think about these
companies and what they're trying to do
like this is why a lot of people can't
understand why these companies are
spending millions and millions of
dollars on AI like there was an article
recently where it's talking about okay
you know AI they're spending millions
and millions of dollars on these
training runs on these researchers but
Wall Street just can't understand the
long-term picture cuz Wall Street
they're thinking about you know cash
flow thinking about all these metrics
stock valuations but I'm going to show
you guys this article right now you can
see here it says has the AI Bubble Burst
Wall Street wonders if artificial
intelligence will ever make money and
you can see that you know there has been
one question in the minds of Wall Street this tech earnings season: when will anyone start actually making money from artificial intelligence? And in the 18 months since ChatGPT kicked off the arms race, they've promised that this is poised to revolutionize every single industry
but like I said before of course they're
spending billions of dollars on data centers and, you know, the semiconductors
needed to run the AI models but like I
said you know these guys on Wall Street
are not thinking about you know
completely 2030 when things start to get
a little bit more crazy I like to think
of it like this where AI right now yes
it's having a ChatGPT moment but once
you know a lot more capabilities are on
the line these AI companies are going to
become so much more valuable like the
money that they're going to make is just
going to go up and up and up and up like
that I think it's really going to be
like that. Of course it's probably going to level off, but we're definitely still on that sigmoid curve where there's going to be huge growth towards the end, and I think that you
know many of these um you know companies
just can't seem to Fathom that in the
future okay they're predicting that you
know even this company okay this
research organization they are
predicting that I think 100% of tasks
get automated by something like 2043 and
I mean you have to think about it okay
if the global economy is going to be
outputting $60 trillion per year I'm not
sure how much, okay, global labor compensation is going to grow or
decrease by but you have to think about
it you know these top companies they're
going to be getting a lot of that value
now none of these companies make
trillions of dollars per year but you
could argue in the future that with AI
and automation that, for the first time, this is probably going to happen, so I do think
that those companies their valuations
are going to be you know astronomical in
the future. This isn't like a stock tips video, but here the
researchers are saying that look
investing trillions of dollars to
capture even a fraction of the flow is
economically Justified which is what a
lot of people can't seem to think which
means that like when you think about you
know the future 2030 2040 what the years
are going to look like it truly is you
know something that's going to blow my
mind now so now this is where we talk
about 10x output so it says here that
standard economic models predict that if
AI automation reaches a point where it
replaces almost all human labor economic
growth could accelerate by tenfold or
more over just a few decades this
accelerated growth could increase
economic output by several orders of
magnitude and given this potential
achieving complete or near complete
automation earlier could be worth a
substantial portion of global output and
recognizing this immense value investors
May redirect significant portions of
their capital from traditional sectors
into AI development and essential
infrastructure such as the energy
production the distribution and the
semiconductor fabrication plants and
data centers and it says that this
potential for
unprecedented economic growth could
drive trillions of dollars in investment
in AI development now if you remember
previously earlier this year where a
certain someone, a certain Sam Altman, was
talking about how much money he is going
to be spending on AI and some of the
future valuations that he was talking
about. You can see here it says Sam Altman has a mind-boggling price tag according
to the Wall Street Journal somewhere
between 5 and 7 trillion and you can see
here that you know pretty much everyone
is clowning him it says such numbers are
Preposterous the fact that they're being
talked about with anything approaching a
straight face is indicative okay of a
degree to which the broader AI discourse has become unmoored from reality. However,
we're seeing that these guys that do
research and they try to truly
understand with conservative outputs
okay where the AI growth is actually
going to be and remember this isn't some
lab that's doing like a clickbait
article they're literally just
publishing their research for anyone to
view and they're just tweeting it out
it's not like this hype-y thing, but
what we're seeing here is that they're
also stating that you know trillions of
dollars being invested in here is not
that crazy. But you can see here that because Sam Altman has been seeking trillions of dollars to reshape the business of chips and AI, many people think that this is insane. This is incredible, look at it
guys look I mean look at the the
research guys like this is something
that they're saying that look okay when
you start to see okay how much AI you
know is going to be automating the
economy and how much you know economic
value AI is going to eat up like putting
trillions of dollars into that doesn't
seem that crazy when you, you know, factor that in. It says, you know,
recognizing this immense value investors
may redirect significant portions of their capital from traditional sectors into AI development. So when Sam Altman was
talking about trillions of dollars he
wasn't just completely you know going
off the rails in terms of AI hype this
is something that certain research
organizations are already starting to
talk about now this is where we talk
about some of the compute for larger
models you can see here that it says
frontier training runs by 2030 are projected to be 5,000 times larger than
llama 3.1 and it says however we don't
expect power demand to scale as much and
this is for several different reasons
but a 5,000 times larger training run
than llama 3.1 in the next 6 years it
seems crazy but I mean you can just
imagine okay and this is actually a
conservative estimate because they do
have you know values that are on the
high end but in this writing they've
actually put the conservative estimate
because like I said before it's not like
this hype you know journalistic article
it's actually just people that are doing
research based on what they see based on
the data that they're looking at so I
mean when you actually take that into
account it seems that the future is
going to be absolutely incredible now
you can see right here it says that they
also expect training runs to be longer
okay and it says since 2010 the length
of training runs has increased by 20%
per year among notable models since we
expect power constraints to become more
pressing training run durations could
lengthen to spread out energy needs over
time of course they're talking about
many different things but basically
they're stating that training runs could
take around a year um or around you know
a few hundred days so they do state that
look it's going to be unlikely that
training runs will exceed a year as Labs
will wish to adopt better algorithms and
training techniques on that sort of timescale, which will provide substantial performance gains, so
basically saying that look no point
training it for an entire year because
by the time you finish training it
there's going to be algorithmic
improvements that you're going to need
to go ahead and retrain the model you
know completely once again so it's going
to be completely intriguing to see what
these future models are and how they're
going to be trained but you can see
right here that Llama 3.1 was trained over 72 days, just over 2 months, but GPT-4 was trained over about 100 days, which is around 3 months
the point is is that it's going to be
interesting to see how these training
techniques differ now one thing that we
are seeing is that companies are
starting to absolutely buy into this we
can see that meta bought the rights to a
power output of 350 megawatt solar farm
in Missouri and a 300 megawatt solar
farm in Arizona and Amazon owns a data
center campus in Pennsylvania with a
contract for 960 megawatts from the adjoining 2.5 GW nuclear plant, so you
can see that Amazon is really really
pushing the envelope when it comes to
the amount of power that they're going
to need because they are really going
all in on this stuff and you can see
here that it says that the primary
motivation behind these deals is to save
on grid connection costs and guarantee a
reliable energy Supply in the coming
years data centers might allow for
unprecedentedly large training runs to
take place and a 960 megawatt data
center would be over 35 times more power than the 27 megawatts required for today's training runs. We can see that this is already starting to happen behind the scenes;
these companies are ramping up for you
know 35 times more power needed than
current AI models and you can see here
that it says that you know some
companies are investigating options for
gigawatt-scale data centers, and basically they're stating that
we're going to have gigawatt scale data
centers that actually seem feasible by
2030 and it says that this assessment is
supported by industry leaders and
corroborated by recent media reports
this is the CEO of NextEra, the largest
utility company in the United States
recently stated that while finding a
site for a 5 GW AI data center would be challenging, locations capable of supporting 1 GW facilities do exist within the country. So they're basically stating that look, whilst 5 GW AI data centers are pretty insane, facilities for a 1 GW data center currently do exist
within the country and of course if you
do remember that, you know, OpenAI and Microsoft have the 2028 Stargate data center that will require several gigawatts of power, with an expansion up to 5 GW by 2030
now that's a huge feat and that's going
to be really difficult to accomplish but
I mean this is you know a race there's
going to be lots and lots of money
invested in this and you have to
understand that they're talking about
capturing $60 trillion of economic value
so I think a few billion dollars into
some data centers is something that
they're not going to scoff at so now you
can see here that this is where they
talk about the future training runs they
say that training runs will presumably not exceed six months, and they assume that training runs will last around 2 to 9 months,
on the higher end if progress in
hardware and software stalls and on the
lower end if progress accelerates
relative to today. So it could be two
months or it could be 9 months so this
is pretty crazy cuz it seems that you
know, training is still going to get quite a bit longer, and then of course this is where
we get into some incredible statistics
it says since the chinchilla scaling
laws suggest that one ought to scale
up data set size and model size
proportionately scaling up training data
by a factor of 30 times by using the
entirety of the index web would enable
labs to train models with 30 times more
data and 30 times more parameters
resulting in 900 times as much compute, okay, if models are trained to be Chinchilla-optimal, which is absolutely
insane okay and you know people have
been saying that we've you know
exhausted all our data but we haven't
actually done that yet so can you
imagine a model being trained with 30
times more data 30 times more parameters
and 900 times more compute? I mean it's going to be truly incredible as to how these systems are going to be
working now like I said before many
people have spoken about this data wall
which is a thing where you know people
are thinking that okay we're going to
run out of data but you can see right
here that they say that if the recent
trend of four-times-a-year scaling, you know, continues, we would run into this data wall for text data in about 5 years
so basically where we completely run out
of data but it also does State here that
however data from other modalities and
synthetic data generation might help
mitigate this constraint we will argue
that the multimodal data will result in
effective data stocks of about 450
trillion to 23 quadrillion tokens
allowing for impressive training runs
and of course synthetic data might
enable scaling much Beyond this if AI
labs spend a significant fraction of
their compute budgets on data generation
now the synthetic data conversation is
one that's rather interesting because
there was this recent report and
basically there was this paper that you
know actually addresses an issue with
synthetic data now basically with
synthetic data um there was this issue
called Model collapse and I need to show
you guys what this is it's not really a
real issue but this is something that
people always bring up and I'm going to
show you guys I know this isn't the best
image that you're going to see not from
the best article either but essentially
what they're stating is that you know
you have real images then those real
images produce fake images those fake
images are used to train another model
that produces even more fake images and
by the fourth iteration you have a
system that collapses essentially um and
basically they're saying that you know
this lack of human data is going to
limit AI progress however um what these
studies uh show is that models that are
just trained on their own data again and again and again, without really, you know, filtering with humans and stuff like that, um
this is why I'm talking about this paper
cuz this paper came out recently um and
this B basically you know um they've had
a new method and this method is called
reinforcement to improve the quality of
AI generated data and this involves
having a system which could be a human
or an AI which checks the generated data
and then only selects the best examples
for training future models and basically
they provide mathematical proof that
under certain conditions using
reinforced data can prevent model
collapse and even lead to perfect
performance in some cases so without
reinforcement training on AI generated
data would indeed lead to worse
performance which is model collapse but
with reinforcement and selecting the
best AI generated data they could
prevent model collapse and sometimes
even improve model performance beyond
the original model and the quality of
both the data and the generator and the
reinforcement system are important for
good results so whilst many people are
thinking that synthetic data is simply
this hole that is never going to be
filled there is a lot of research that
is out there that suggests that this
isn't the truth now what we also do have
is this graph that shows us the largest
feasible training runs given the
different constraints many people you
know talk about AI hype and they talk
about how AI is just complete overly
hyped in terms of the future progress
but like I said before these are people
who've researched the stuff and they
said that this is what the largest
feasible training runs are given the
actual different constraints so we have
different constraints here we've got the
power constraints which are you know the
energy supplies of course we've got the
chip production capacity which is NVIDIA
being able to even produce enough chips
recently we had news that there were
delays on, I think, the B200s, and of
course we've got the data scarcity and
of course the latency wall now you can
see here that they state that the most
binding constraints are power and Chip
availability and you can see that
essentially these are the ones here that
are pretty crazy but you can see that it
says that data stands out as the most
uncertain bottleneck with its
uncertainty SP a range of four orders of
magnitude you can see on the graph here
that data is all the way down here and
it's all the way up here so they're not
sure but you can see that by 2030 this
is where they expect things to be and
I'm going to show you guys another image
that basically explains everything but
basically the worst case scenario okay
like the literal worst case scenario is
that we have systems that are you know
10,000 times greater in terms of the
scale so this is you know pretty insane
when you actually think about it you can
see that there are other areas where we
could get to 50 ,000 times greater you
know chip capacity 880,000 times greater
a million times greater in terms of the
latency but um yeah it just shows us
that you know by 2030 things are going
to get rather incredible and I mean this
is taking you know the average you know
of all of these and then of course you
can see it's brought it down here so
it's not like the highest the complete
highest but we can see that the 2030
compute projection shows we're going to
have 10,000 times more compute to train
these models by 2030 which means I'm not
like that that there's going to be just
like an explosion in terms of these
models are going to be in terms of their
effect now the takeaway from this that
you should think about is basically
they're stating that by the end of the
decade so by 20130 we're going to be
able to train a model that is 10,000
times larger because gpt2 to GPT 4 if
you remember gpt2 to GPT 4 that scale
was 10,000 times larger and they're
basically saying that we're going to be
able to do that by the end of the decade
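The projection logic amounts to taking a minimum over independent bottlenecks: the feasible 2030 scale-up is capped by whichever constraint is tightest. A rough sketch using the order-of-magnitude multipliers read off the report's graph (the ~2e25 FLOP figure for a GPT-4-class training run is my own assumed baseline, not a number from the report):

```python
# Approximate scale-up ceilings by 2030 relative to today's largest
# training runs, order-of-magnitude figures read off the graph.
constraints = {
    "power": 10_000,       # energy supply to data centers
    "chips": 50_000,       # chip production capacity
    "data": 80_000,        # data scarcity (most uncertain: ~4 OOM spread)
    "latency": 1_000_000,  # the "latency wall"
}

# The feasible run is capped by the tightest bottleneck.
binding = min(constraints, key=constraints.get)
feasible_scaleup = constraints[binding]

# Assumed baseline (my assumption): GPT-4-class run ~2e25 FLOP.
gpt4_flop = 2e25
projected_flop = gpt4_flop * feasible_scaleup

print(binding, feasible_scaleup)  # power is the binding constraint
print(f"{projected_flop:.0e}")    # roughly 2e29 FLOP
```

This is why the headline number is "10,000 times" rather than the million-fold latency ceiling: power is projected to bind first.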
So imagine that: we've had quite a lot of progress in just the past three years, but now, with all the investment, all the money, all the eyes on AI, all the major players in robotics, and companies like Anthropic, Google, and Amazon competing, plus the fact that by 2030 we're going to have 10,000 times more compute available and be able to train a model 10,000 times larger in scale, what kinds of systems are we going to have? It's going to be pretty crazy, but I think this should help you understand that even the conservative estimates of the people who have done the research show we're going to have an incredible time in AI. Hopefully this video educated you a little on how the future is going to look in terms of compute. The full report is actually really long; you can see I'm scrolling down for quite some time. It's called "Can AI scaling continue through 2030?" and it opens with "We investigate the scalability of AI training runs," identifying all of this stuff. It's really, really long and super detailed; many different people have contributed research to it, all the sources are cited, and you can navigate it on the right-hand side, clicking through different sections, some of which are images. If you do want to read it, the link will be in the description. If you have any comments, let me know down below what you think and whether the future is going to be crazy, and I'll see you guys in the next one.