Is the Intelligence Explosion Near? A Reality Check.
Summary
TL;DR: In this video, the speaker discusses Leopold Aschenbrenner's controversial essay predicting AGI by 2027. Aschenbrenner argues AI will surpass human intelligence rapidly, driven by computing power and algorithmic improvements. The speaker agrees AI will advance but challenges the energy and data assumptions, questioning the feasibility of massive power requirements and data collection through robots. They also highlight AGI's potential in unlocking scientific insights and correcting human errors, but caution against the security risks and the Silicon Valley bubble's narrow focus on US-China dynamics.
Takeaways
- 🧠 Leopold Aschenbrenner, recently fired from OpenAI, predicts the imminent arrival of artificial superintelligence.
- 📝 Aschenbrenner has written a 165-page essay detailing his belief in the rapid scaling of AI systems and their potential to outperform humans in all tasks.
- 💡 He attributes the growth in AI performance to increased computing power and algorithmic improvements, which he believes are far from saturated.
- ⏳ Aschenbrenner forecasts the emergence of artificial general intelligence (AGI) by 2027, suggesting an 'intelligence explosion' will follow.
- 🔓 He believes current limitations in AI, such as memory constraints and inability to use computing tools, are easily overcome and will be in the near future.
- 🤖 The speaker agrees with Aschenbrenner that AI will eventually surpass human intelligence but disputes the timeline and the subsequent impacts.
- 💡 The speaker challenges Aschenbrenner's prediction, citing energy consumption and data availability as major limiting factors for AI development.
- 🔋 Training larger AI models requires significant energy, which the speaker doubts can be supplied at the scale Aschenbrenner suggests.
- 🌐 The speaker questions the feasibility of creating a robot workforce to collect data, pointing out the economic and resource challenges involved.
- 🔍 AGI could unlock progress in science by making use of currently underutilized scientific knowledge and by preventing common human errors.
- 🌐 Aschenbrenner's essay discusses security risks associated with AGI, focusing on a US-China dynamic and ignoring broader global contexts.
- 📉 The speaker reflects on past predictions of AI and technology, noting a pattern of overestimation in the pace of change by frontier researchers.
Q & A
Who is Leopold Aschenbrenner and what is his stance on artificial superintelligence?
-Leopold Aschenbrenner is a young German man in his early twenties who was recently fired from OpenAI. He has written a 165-page essay asserting that artificial superintelligence is imminent and will outperform humans in almost every task by 2027.
What does Aschenbrenner believe will contribute to the rapid growth of AI performance?
-Aschenbrenner believes that the increase in computing clusters and improvements in algorithms are the most relevant factors contributing to the growth of AI performance, and that these factors are not yet saturated.
What is Aschenbrenner's definition of 'unhobbling' in the context of AI?
-'Unhobbling' refers to overcoming the current limitations of AIs, such as lack of memory or inability to use computing tools, which Aschenbrenner believes will be easily and soon accomplished.
What are the two major limiting factors for AI development that the speaker disagrees with Aschenbrenner on?
-The speaker disagrees with Aschenbrenner by pointing out that energy consumption and data availability are the two major limiting factors for AI development that Aschenbrenner underestimates.
How does the speaker critique Aschenbrenner's view on the energy requirements for advanced AI models by 2028 and 2030?
-The speaker critiques Aschenbrenner's view by highlighting the impracticality of building the necessary power plants and the cost involved, suggesting that such a scale-up in energy consumption is unlikely to happen within the predicted timeframe.
What is the speaker's perspective on the role of robots in collecting data for AI?
-The speaker is skeptical about Aschenbrenner's idea of deploying robots to collect data, arguing that creating a robot workforce would require a significant change in the world economy and would not happen within a few years.
According to the speaker, what are the two ways AGI could unlock progress in science and technology?
-The speaker believes AGI could unlock progress by reading and synthesizing the vast amount of scientific literature that currently goes unread and by preventing common human errors in logical thinking, biases, data retrieval, and memory.
What historical predictions does the speaker refer to when discussing the overestimation of AI development timelines?
-The speaker refers to predictions made by Herbert Simon in 1960 and other predictions from the 1970s, which all suggested that machines would be capable of doing any human work within a couple of decades, but were ultimately incorrect.
What is the 'silicon valley bubble syndrome' that the speaker mentions in relation to Aschenbrenner's essay?
-The 'silicon valley bubble syndrome' refers to the speaker's perception that Aschenbrenner and others in the tech industry are living in an isolated bubble, overestimating the pace of technological change and ignoring broader global issues like the climate crisis.
What is the speaker's view on the potential security risks associated with AGI?
-The speaker agrees with Aschenbrenner that AGI will bring significant security risks and that most people and governments currently underestimate its impact. They predict that once the impact is recognized, there will be a rush to control AGI and impose limitations on its use.
What recommendation does the speaker make for those interested in learning more about AI and related topics?
-The speaker recommends checking out courses on brilliant.org for a variety of topics in science, computer science, and mathematics, including large language models and quantum computing, with interactive visualizations and follow-up questions.
Outlines
🤖 AI's Imminent Superintelligence Debate
The paragraph introduces Leopold Aschenbrenner's controversial view on the rapid approach of artificial superintelligence, as outlined in his 165-page essay. Aschenbrenner, a young German entrepreneur and thinker, posits that AI systems are scaling up at an unprecedented rate and will surpass human intelligence in all tasks by 2027. He attributes this growth to the expansion of computing clusters and algorithmic improvements. The narrator, however, disagrees with Aschenbrenner's optimistic timeline, citing energy and data as significant limiting factors for AI development.
🚫 Challenges in Achieving AGI: Energy and Data
This paragraph delves into the narrator's skepticism regarding Aschenbrenner's predictions, focusing on the energy consumption and data requirements of training advanced AI models. The narrator highlights the impracticality of building new power plants to support the energy-hungry AI systems Aschenbrenner envisions, questioning the feasibility of his projections. Additionally, the narrator points out the challenge of acquiring new data once online resources are exhausted, suggesting that Aschenbrenner underestimates the complexity of creating a robot workforce and the economic shifts required for such a transformation.
🛠 AGI's Potential and Security Concerns
The final paragraph acknowledges the potential of AGI to revolutionize science and technology by efficiently processing vast amounts of scientific literature and reducing human error. However, it also addresses the security risks associated with AGI, critiquing Aschenbrenner's US-China centric view and emphasizing the global and environmental challenges that could overshadow AGI's development. The narrator concludes with a historical perspective on failed predictions of technological revolutions, suggesting a more cautious approach to forecasting AGI's impact.
👋 Closing Remarks and Resource Recommendation
In the closing paragraph, the narrator thanks the viewers for watching and hints at continuing the discussion in the next video. Additionally, a promotional offer for brilliant.org is presented, encouraging viewers to explore courses on various scientific topics, including AI and quantum computing, with interactive learning tools and discounts for channel users.
Keywords
💡Artificial Superintelligence
💡AI Scaling
💡Computing Clusters
💡Algorithm Improvements
💡Artificial General Intelligence (AGI)
💡Unhobbling
💡Energy Consumption
💡Data
💡Robot Workforce
💡Neutron-Free Nuclear Fusion
💡Security Risks
💡Silicon Valley Bubble Syndrome
Highlights
Leopold Aschenbrenner believes that artificial superintelligence is just around the corner and has written a 165-page essay explaining why.
Aschenbrenner says that current AI systems are scaling up incredibly quickly and will soon outperform humans in pretty much anything.
He predicts that by 2027 we will have artificial general intelligence (AGI).
Aschenbrenner discusses 'unhobbling,' referring to overcoming current limitations in AI, such as a lack of memory and inability to use computing tools.
He foresees AI soon researching itself and improving its own algorithms, leading to rapid scientific and technological progress.
Aschenbrenner's predictions face major limiting factors: energy and data.
By 2028, the most advanced AI models will run on 10 Gigawatts of power at a cost of several hundred billion dollars.
By 2030, AI models will require 100 Gigawatts of power, costing a trillion dollars, the output of roughly 100 typical power plants (or, per Aschenbrenner, about 1,200 new natural gas wells).
Aschenbrenner suggests that natural gas and nuclear fusion could power these AI systems.
Critics argue that creating a huge robot workforce will require changing the entire world economy, taking decades.
Aschenbrenner predicts that AI will unlock huge progress in science by reading all published literature and preventing human errors.
The essay discusses the security risks of AGI, focusing on US versus China and the likely race to control AGI.
Historical predictions of machine revolutions have often overestimated the pace of change, suggesting caution.
Despite skepticism, AGI is expected to make significant scientific and technological advancements by eliminating everyday human errors.
The importance of being realistic about AGI timelines and focusing on practical AI applications is emphasized.
Transcripts
“Everyone is now talking about AI, but few have the faintest glimmer of what is about
to hit them.” That’s a quote from Leopold Aschenbrenner who was recently fired from
OpenAI. He believes that artificial superintelligence is just around the
corner and has written a 165-page essay explaining why. I spent the last weekend
reading this essay and want to tell you what he says and why I think he’s wrong.
Let me start with some context on Aschenbrenner,
who you see talking here. Young man, early twenties, German origin. Had a brief gig
at the Oxford Centre for Global Priorities. Now lives in San Francisco and according to his own
website “recently founded an investment firm focused on artificial general intelligence”.
In his new essay, Aschenbrenner says that current AI systems are
scaling up incredibly quickly. He sees no end for this trend,
and therefore they will soon outperform humans in pretty much anything.
I can’t see no end says man who earns money from seeing no end.
He explains that the most relevant factors that currently contribute to the growth of AI
performance is the increase of computing clusters and improvements of the algorithms. Neither
of these factors is yet remotely saturated. That’s why, he says, performance will continue
to improve exponentially for at least several more years, and that is sufficient for AI to exceed
human intelligence on pretty much all tasks. By 2027 we will have artificial general
intelligence, AGI for short. According to Aschenbrenner.
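The compounding argument behind that 2027 date can be made concrete with a toy calculation. This is purely illustrative: the tenfold-per-year growth factor below is a hypothetical placeholder for the combined effect of bigger clusters and better algorithms, not a figure quoted in the video.

```python
# Toy illustration of compounding "effective compute" growth.
# The yearly factor is a hypothetical assumption for illustration only,
# standing in for combined hardware scaling and algorithmic gains.
def effective_compute(years: int, yearly_factor: float = 10.0) -> float:
    """Multiple of today's effective compute after `years` of compounding."""
    return yearly_factor ** years

# Under this assumption, three years of compounding (e.g. 2024 -> 2027)
# yields three orders of magnitude more effective compute.
print(effective_compute(3))  # 1000.0
```

The point of the sketch is only that exponential growth, if it really continues unsaturated, turns a few years into orders of magnitude, which is the crux of Aschenbrenner's argument and of the narrator's skepticism about whether the inputs can keep pace.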
He predicts that a significant contribution to this trend will be
what he calls “unhobbling”. By this he means that current AIs have limitations that *can
easily be overcome and *will soon be overcome. For example, a lack of memory or that they can’t
themselves use computing tools. Like, why not link them to a maths software. Indeed,
let them livestream on YouTube, the future is bright, people.
I know it sounds a little crazy, but I’m with him so far. I think
he’s right that it won’t be long now until AI outsmarts humans because, I mean look,
it isn’t all that hard is it. I also agree that soon after this,
artificial intelligence will be able to research itself and to improve its own algorithms.
Where I get off the bus is when he concludes that this will lead to the intelligence explosion
accompanied by extremely rapid progress in science and technology and society overall.
Do you get off the bus and miss the boat or get off the boat and miss the bus?
These damn English idioms always throw me off. The reason I don’t believe in Aschenbrenner’s
prediction is that he totally underestimates the two major limiting factors: Energy and Data.
Training bigger models takes up an enormous amount of energy.
According to Aschenbrenner, by 2028 the most advanced models will run on
10 Gigawatts of power at a cost of several hundred billion dollars. By
2030 they’ll run at 100 Gigawatts at a cost of a trillion dollars.
For comparison, a typical power plant delivers something in the range of a Gigawatt or so.
That means by 2028, they’d have to build 10 power plants in addition to the supercomputer
cluster. Can you do this? Totally. Is it going to happen. You got to be kidding me.
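The plant-count arithmetic above can be sketched in a few lines, taking the video's own figure of roughly one Gigawatt per typical power plant:

```python
import math

# Back-of-the-envelope estimate from the video: how many typical
# power plants would a training cluster of a given size need?
PLANT_OUTPUT_GW = 1.0  # assumed output of a typical power plant (~1 GW)

def plants_needed(cluster_gw: float) -> int:
    """Number of ~1 GW power plants needed to supply a cluster_gw cluster."""
    return math.ceil(cluster_gw / PLANT_OUTPUT_GW)

print(plants_needed(10))   # 2028 scenario: 10 plants
print(plants_needed(100))  # 2030 scenario: 100 plants
```

The 2030 figure of around 100 plants' worth of new generation is what makes the narrator doubt the timeline, independent of what fuel those plants would burn.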
What’d all those power stations run on anyway? Well, according to Aschenbrenner,
on natural gas. “Even the 100GW cluster is surprisingly doable,” he writes,
because that’d take only about 1200 or so new wells. Totally doable. And if that doesn’t work,
I guess they can just go the Sam Altman way and switch to nuclear fusion power.
Honestly, I think these guys have totally lost the plot. They’re living in some techno
utopian bubble that has group think written on it in Capital Letters.
Yes, Helion Energy says they’ll produce net power from neutron free nuclear fusion by 2028. Leaving
aside that there are some reasonable doubts about how neutron-free this neutron-free fusion
actually is, and I for sure wouldn’t go anywhere near the thing, no one has ever managed to get
net energy out of this reaction. I talked about all those fusion startups in an earlier video.
Then there’s the data. Ok, so you’ve trained your AI on all the data that was available
online. Now what. Where are you going to get more data. Aschenbrenner says,
no problem, you deploy robots who collect it. Where do you get those robots from? Well,
Aschenbrenner thinks that AIs will solve all remaining robot problems and the first robots
will build factories to build more robots. Alright. But what will they build the
factories with. Ah, resources that will be mined and transported by,
let me guess, more robots. That will be built in the factories that’ll be
constructed from the resources mined by the robots. I think that isn’t going to work.
Creating a huge robot workforce will not just require AGI, it will require changing the entire
world economy. This will eventually happen, but not within a couple of years. It’ll take decades
at best. And until then the biggest limiting factor for AGI will be lack of data. The best
algorithm in the world isn’t going to deliver new insights if it’s got no data to work on.
That said, I think he is right that AGI will almost certainly be able to unlock huge
progress in science and technology. This is because a lot of scientific knowledge
currently goes to waste just because no human can read everything that’s been
published. AGI will be able to do this. There must be lots of insights hidden
in the published scientific literature, without doing any new research whatsoever.
The other relevant thing that AGI will be able to do is to just prevent errors. The
human brain makes a lot of mistakes that are usually easy to identify and correct.
Logical mistakes, biases, data retrieval errors, memory lapses,
why did I go into the kitchen, and so on. Even before AGI actually does anything new,
it’ll change the world by basically removing these constant everyday errors.
The second half of his essay is dedicated to the security risks that will go along with AGI. His
entire discussion is based on the US versus China, like the rest of the world basically
doesn’t exist, that’s one of the symptoms of what I want to call the silicon valley bubble
syndrome. But leaving aside that he forgets the world is more than just two countries,
and that the world economy is about to be crushed by a climate crisis, I agree with him.
Most people on this planet, including all governments,
currently seriously underestimate just how big an impact AGI will make. And when they wake up,
they’ll rapidly try to gain control of whatever AGI they can get their hands on,
and put severe limitations on its use. It’s not that I think this is good or
that I want this to happen, but this is almost certainly what’s going to happen. In practice
it’ll probably mean that high compute queries will require security clearance.
Let’s step back and have a quick look at past predictions of the impending machine revolution.
In 1960, Herbert Simon, a Nobel Prize laureate in economics speculated that “machines will be
capable, within twenty years, of doing any work a man can do.” In the 1970s, similar forecasts again promised machines that could do any human work within a couple of decades.
All these predictions were wrong. What I take away from this long list
of failed predictions is that people involved in frontier research tend to vastly overestimate the
pace at which the world can be changed. I wish we’d actually live in the world
that Aschenbrenner seems to think we live in. I can’t wait for superhuman
intelligence. But I’m afraid the intelligence explosion isn’t as near as he thinks. So,
in the meantime, don't give up on teaching your toaster to stop burning your toast.
Artificial intelligence is really everywhere these days. If you want
to learn more about how neural networks and large language models work, I recommend you
check out the courses on brilliant.org. Brilliant.org offers courses on a large
variety of topics in science, computer science, and mathematics. All their courses have interactive
visualizations and come with follow-up questions, some even have executable Python scripts or videos
with little demonstration experiments. Whether you want to know more about large
language models or Quantum Computing, want to learn coding in python, or know how computer
memory works, Brilliant has you covered. And they're adding new courses each month.
And of course I have a special offer for users of this channel. If you use my link
brilliant.org slash Sabine you'll get to try out everything brilliant has to offer for
full 30 days and you'll get 20% off the annual premium subscription. So go and check this out.
Thanks for watching, see you tomorrow.