Is the Intelligence-Explosion Near? A Reality Check.

Sabine Hossenfelder
13 Jun 2024 · 10:19

Summary

TL;DR: In this video, the speaker discusses Leopold Aschenbrenner's controversial essay predicting AGI by 2027. Aschenbrenner argues that AI will rapidly surpass human intelligence, driven by growing computing power and algorithmic improvements. The speaker agrees that AI will keep advancing but challenges the essay's energy and data assumptions, questioning whether the massive power build-out is feasible and whether robots can realistically collect the needed data. The speaker also highlights AGI's potential to unlock underused scientific knowledge and correct routine human errors, while cautioning about security risks and the Silicon Valley bubble's narrow focus on US-China dynamics.

Takeaways

  • 🧠 Leopold Aschenbrenner, recently fired from OpenAI, predicts the imminent arrival of artificial superintelligence.
  • 📝 Aschenbrenner has written a 165-page essay detailing his belief in the rapid scaling of AI systems and their potential to outperform humans in all tasks.
  • 💡 He attributes the growth in AI performance to increased computing power and algorithmic improvements, which he believes are far from saturated.
  • ⏳ Aschenbrenner forecasts the emergence of artificial general intelligence (AGI) by 2027, suggesting an 'intelligence explosion' will follow.
  • 🔓 He believes current limitations in AI, such as memory constraints and the inability to use computing tools, can easily be overcome and soon will be.
  • 🤖 The speaker agrees with Aschenbrenner that AI will eventually surpass human intelligence but disputes the timeline and the subsequent impacts.
  • 💡 The speaker challenges Aschenbrenner's prediction, citing energy consumption and data availability as major limiting factors for AI development.
  • 🔋 Training larger AI models requires significant energy, which the speaker doubts can be supplied at the scale Aschenbrenner suggests.
  • 🌐 The speaker questions the feasibility of creating a robot workforce to collect data, pointing out the economic and resource challenges involved.
  • 🔍 AGI could unlock progress in science by making use of currently underutilized scientific knowledge and by preventing common human errors.
  • 🌐 Aschenbrenner's essay discusses security risks associated with AGI, focusing on a US-China dynamic and ignoring broader global contexts.
  • 📉 The speaker reflects on past predictions of AI and technology, noting a pattern of overestimation in the pace of change by frontier researchers.

Q & A

  • Who is Leopold Aschenbrenner and what is his stance on artificial superintelligence?

    -Leopold Aschenbrenner is a young German man in his early twenties who was recently fired from OpenAI. He has written a 165-page essay asserting that artificial superintelligence is imminent and will outperform humans in almost every task by 2027.

  • What does Aschenbrenner believe will contribute to the rapid growth of AI performance?

    -Aschenbrenner believes that the increase in computing clusters and improvements in algorithms are the most relevant factors contributing to the growth of AI performance, and that these factors are not yet saturated.

  • What is Aschenbrenner's definition of 'unhobbling' in the context of AI?

    -'Unhobbling' refers to removing the current limitations of AIs, such as lack of memory or the inability to use computing tools; Aschenbrenner believes these can easily be overcome and soon will be.

  • What are the two major limiting factors for AI development that the speaker disagrees with Aschenbrenner on?

    -The speaker argues that Aschenbrenner underestimates the two major limiting factors for AI development: energy consumption and data availability.

  • How does the speaker critique Aschenbrenner's view on the energy requirements for advanced AI models by 2028 and 2030?

    -The speaker critiques Aschenbrenner's view by highlighting the impracticality of building the necessary power plants and the cost involved, suggesting that such a scale-up in energy consumption is unlikely to happen within the predicted timeframe.

  • What is the speaker's perspective on the role of robots in collecting data for AI?

    -The speaker is skeptical about Aschenbrenner's idea of deploying robots to collect data, arguing that creating a robot workforce would require a significant change in the world economy and would not happen within a few years.

  • According to the speaker, what are the two ways AGI could unlock progress in science and technology?

    -The speaker believes AGI could unlock progress by reading and synthesizing the vast amount of scientific literature that currently goes unread and by preventing common human errors in logical thinking, biases, data retrieval, and memory.

  • What historical predictions does the speaker refer to when discussing the overestimation of AI development timelines?

    -The speaker refers to predictions made by Herbert Simon in 1960 and other predictions from the 1970s, which all suggested that machines would be capable of doing any human work within a couple of decades, but were ultimately incorrect.

  • What is the 'silicon valley bubble syndrome' that the speaker mentions in relation to Aschenbrenner's essay?

    -The 'Silicon Valley bubble syndrome' refers to the speaker's perception that Aschenbrenner and others in the tech industry live in an insular bubble, overestimating the pace of technological change and overlooking broader global issues such as the climate crisis.

  • What is the speaker's view on the potential security risks associated with AGI?

    -The speaker agrees with Aschenbrenner that AGI will bring significant security risks and that most people and governments currently underestimate its impact. They predict that once the impact is recognized, there will be a rush to control AGI and impose limitations on its use.

  • What recommendation does the speaker make for those interested in learning more about AI and related topics?

    -The speaker recommends checking out courses on brilliant.org for a variety of topics in science, computer science, and mathematics, including large language models and quantum computing, with interactive visualizations and follow-up questions.

Outlines

00:00

🤖 AI's Imminent Superintelligence Debate

The paragraph introduces Leopold Aschenbrenner's controversial view that artificial superintelligence is approaching rapidly, as laid out in his 165-page essay. Aschenbrenner, a young German entrepreneur and thinker, posits that AI systems are scaling up at an unprecedented rate and will surpass human intelligence in essentially all tasks by 2027. He attributes this growth to the expansion of computing clusters and to algorithmic improvements. The narrator, however, disputes Aschenbrenner's optimistic timeline, citing energy and data as significant limiting factors for AI development.
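Aschenbrenner's scaling argument is, at bottom, compound growth. Here is a minimal Python sketch of how "effective compute" compounds when hardware scaling and algorithmic gains stack; the growth rates below are illustrative assumptions, not figures taken from the essay:

```python
# Compound growth of "effective compute" when hardware scaling and
# algorithmic efficiency gains stack. The rates are illustrative
# assumptions, not numbers from Aschenbrenner's essay.

COMPUTE_OOM_PER_YEAR = 0.5  # assumed: physical compute, orders of magnitude per year
ALGO_OOM_PER_YEAR = 0.5     # assumed: algorithmic efficiency, OOM per year

def effective_compute_growth(years: float) -> float:
    """Total growth factor in effective compute after `years` years."""
    ooms = years * (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR)
    return 10 ** ooms

for years in (1, 3, 5):
    print(f"after {years} year(s): {effective_compute_growth(years):,.0f}x")
```

Under these assumed rates the factor reaches 100,000x after five years, which is why the whole forecast turns on the narrator's question of whether the trends saturate.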

05:00

🚫 Challenges in Achieving AGI: Energy and Data

This paragraph delves into the narrator's skepticism regarding Aschenbrenner's predictions, focusing on the energy consumption and data requirements of training advanced AI models. The narrator highlights the impracticality of building new power plants to support the energy-hungry AI systems Aschenbrenner envisions, questioning the feasibility of his projections. Additionally, the narrator points out the challenge of acquiring new data once online resources are exhausted, suggesting that Aschenbrenner underestimates the complexity of creating a robot workforce and the economic shifts required for such a transformation.
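The energy objection can be checked with back-of-envelope arithmetic. A minimal sketch using the video's figures (10 GW clusters by 2028, 100 GW by 2030, and roughly 1 GW from a typical power plant):

```python
# Back-of-envelope check of the video's power figures: projected
# cluster demand divided by the output of a typical power plant
# (~1 GW, per the video).

TYPICAL_PLANT_GW = 1.0
projected_clusters_gw = {2028: 10.0, 2030: 100.0}  # figures cited in the video

for year, demand_gw in projected_clusters_gw.items():
    plants_needed = demand_gw / TYPICAL_PLANT_GW
    print(f"{year}: {demand_gw:.0f} GW cluster ≈ {plants_needed:.0f} gigawatt-scale plants")
```

That works out to 10 dedicated plants by 2028 and about 100 by 2030; Aschenbrenner's own framing is that the 100 GW case would take roughly 1,200 new natural-gas wells.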

10:00

🛠 AGI's Potential and Security Concerns

This paragraph acknowledges the potential of AGI to revolutionize science and technology by efficiently processing vast amounts of scientific literature and reducing human error. However, it also addresses the security risks associated with AGI, critiquing Aschenbrenner's US-China centric view and emphasizing the global and environmental challenges that could overshadow AGI's development. The narrator then offers a historical perspective on failed predictions of technological revolutions, suggesting a more cautious approach to forecasting AGI's impact.

👋 Closing Remarks and Resource Recommendation

In the closing paragraph, the narrator thanks the viewers for watching and hints at continuing the discussion in the next video. Additionally, a promotional offer for brilliant.org is presented, encouraging viewers to explore courses on various scientific topics, including AI and quantum computing, with interactive learning tools and discounts for channel users.


Keywords

💡Artificial Superintelligence

Artificial Superintelligence refers to a hypothetical level of artificial intelligence that far exceeds human intelligence in every aspect. In the video, Leopold Aschenbrenner believes that this level of AI is imminent and will have profound impacts on society. The script discusses the potential for AI to outperform humans in virtually all tasks, which is central to the theme of the video.

💡AI Scaling

AI Scaling refers to the rapid growth and advancement of artificial intelligence capabilities. The script mentions Aschenbrenner's view that current AI systems are scaling up incredibly quickly, suggesting an exponential improvement in performance, which is a key argument in his prediction of the arrival of artificial general intelligence (AGI).

💡Computing Clusters

Computing clusters are groups of computers that are connected and work together to perform large-scale computations. In the context of the video, the increase in computing clusters is identified as one of the main factors contributing to the growth of AI performance, highlighting the importance of computational power in advancing AI capabilities.

💡Algorithm Improvements

Algorithm improvements refer to enhancements made to the computational procedures or formulas that AI systems use to perform tasks. The script points out that improvements in algorithms, along with increased computing power, are driving the exponential growth of AI, which is a central concept in Aschenbrenner's argument for the imminent arrival of AGI.

💡Artificial General Intelligence (AGI)

Artificial General Intelligence, or AGI, is the hypothetical ability of an AI to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond that of a human. The video discusses Aschenbrenner's prediction that AGI will be achieved by 2027, which is a pivotal point in the debate presented in the script.

💡Unhobbling

In the context of the video, 'unhobbling' refers to the removal of current limitations on AI systems that can be easily overcome, such as memory constraints or the inability to use computing tools. Aschenbrenner suggests that 'unhobbling' will significantly contribute to the rapid development of AI, illustrating a key aspect of his argument for the swift arrival of AGI.

💡Energy Consumption

Energy Consumption in the context of AI refers to the large amount of power required to train and run advanced AI models. The script challenges Aschenbrenner's predictions by pointing out the enormous energy requirements for training AI models, suggesting that this could be a limiting factor for the development of AGI.

💡Data

Data, in the context of AI, refers to the information that is used to train AI models. The script argues that a lack of data could be a major limiting factor for AGI, as even the best algorithms require new data to produce insights, challenging Aschenbrenner's optimistic timeline for AGI development.
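To see why data, and not just compute, can bind, consider the compute-optimal training heuristic of roughly 20 training tokens per model parameter (the "Chinchilla" result of Hoffmann et al., 2022). A minimal sketch; the web-text stock below is an assumed round number for illustration, not a measured figure:

```python
# Token demand under the "Chinchilla" heuristic (~20 training tokens
# per parameter, Hoffmann et al. 2022) versus an assumed stock of
# usable web text. The 50-trillion-token stock is an assumption.

TOKENS_PER_PARAM = 20
ASSUMED_WEB_TOKENS = 5e13  # assumed: ~50 trillion usable web tokens

for params in (1e11, 1e12, 1e13):  # 100B, 1T, 10T parameters
    tokens_needed = params * TOKENS_PER_PARAM
    ratio = tokens_needed / ASSUMED_WEB_TOKENS
    print(f"{params:.0e} params -> {tokens_needed:.0e} tokens "
          f"({ratio:.2f}x the assumed web stock)")
```

On these assumptions, a compute-optimally trained 10-trillion-parameter model would already want several times more text than the web provides, which is exactly the gap the robot-collected data is supposed to fill.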

💡Robot Workforce

A Robot Workforce refers to a hypothetical scenario where robots perform tasks traditionally done by humans. The script discusses Aschenbrenner's idea that AI will solve remaining robot problems and lead to the creation of a robot workforce, which is a key part of his vision for the future of AI and its impact on society.

💡Neutron-Free Nuclear Fusion

Neutron-free (aneutronic) nuclear fusion refers to fusion reactions, such as deuterium-helium-3, that release little or none of their energy as neutrons. The script mentions Helion Energy's claim that it will produce net power from such a process by 2028, which the video uses as an example of the optimistic technological predictions it critiques.

💡Security Risks

Security Risks in the context of AGI refer to the potential dangers and challenges that could arise from the development and deployment of superintelligent AI systems. The script discusses Aschenbrenner's concerns about the security risks associated with AGI, which is an important consideration in the broader conversation about the future of AI.

💡Silicon Valley Bubble Syndrome

Silicon Valley Bubble Syndrome is a term used in the script to describe a perceived narrow-mindedness or insularity in the tech industry, particularly in Silicon Valley, where the rest of the world and other important issues are often overlooked. The script uses this term to critique Aschenbrenner's focus on US-China dynamics, suggesting a broader perspective is needed.

Highlights

Leopold Aschenbrenner believes that artificial superintelligence is just around the corner and has written a 165-page essay explaining why.

Aschenbrenner says that current AI systems are scaling up incredibly quickly and will soon outperform humans in pretty much anything.

He predicts that by 2027 we will have artificial general intelligence (AGI).

Aschenbrenner discusses 'unhobbling,' referring to overcoming current limitations in AI, such as a lack of memory and inability to use computing tools.

He foresees AI soon researching itself and improving its own algorithms, leading to rapid scientific and technological progress.

Aschenbrenner's predictions face major limiting factors: energy and data.

By 2028, Aschenbrenner projects, the most advanced AI models will run on 10 gigawatts of power at a cost of several hundred billion dollars.

By 2030, he projects, AI models will require 100 gigawatts of power at a cost of a trillion dollars, the output of roughly 100 typical power plants or, by his own estimate, about 1,200 new natural-gas wells.

Aschenbrenner suggests natural gas as the power source for these AI systems; the speaker notes sarcastically that they could always "go the Sam Altman way" and switch to nuclear fusion.

The speaker argues that creating a huge robot workforce would require changing the entire world economy, which would take decades at best.

The speaker agrees that AGI will unlock huge progress in science by reading the published literature that no human can get through and by preventing common human errors.

The essay discusses the security risks of AGI, focusing on US versus China and the likely race to control AGI.

Historical predictions of machine revolutions have often overestimated the pace of change, suggesting caution.

Despite the skepticism about timelines, the speaker expects AGI to bring significant scientific and technological advances, in part by eliminating constant everyday human errors.

The speaker emphasizes being realistic about AGI timelines and, in the meantime, focusing on practical, everyday AI.

Transcripts

00:00

"Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them." That's a quote from Leopold Aschenbrenner, who was recently fired from OpenAI. He believes that artificial superintelligence is just around the corner and has written a 165-page essay explaining why. I spent the last weekend reading this essay and want to tell you what he says and why I think he's wrong.

00:29

Let me start with some context on Aschenbrenner, who you see talking here. Young man, early twenties, German origin. Had a brief gig at the Oxford Centre for Global Priorities. Now lives in San Francisco and, according to his own website, "recently founded an investment firm focused on artificial general intelligence".

00:50

In his new essay, Aschenbrenner says that current AI systems are scaling up incredibly quickly. He sees no end for this trend, and therefore they will soon outperform humans in pretty much anything. I can't see no end, says man who earns money from seeing no end.

01:07

He explains that the most relevant factors that currently contribute to the growth of AI performance are the increase of computing clusters and improvements of the algorithms. Neither of these factors is yet remotely saturated. That's why, he says, performance will continue to improve exponentially for at least several more years, and that is sufficient for AI to exceed human intelligence on pretty much all tasks. By 2027 we will have artificial general intelligence, AGI for short. According to Aschenbrenner.

01:43

He predicts that a significant contribution to this trend will be what he calls "unhobbling". By this he means that current AIs have limitations that *can* easily be overcome and *will* soon be overcome. For example, a lack of memory, or that they can't themselves use computing tools. Like, why not link them to maths software. Indeed, let them livestream on YouTube, the future is bright, people.

02:09

I know it sounds a little crazy, but I'm with him so far. I think he's right that it won't be long now until AI outsmarts humans because, I mean, look, it isn't all that hard, is it. I also agree that soon after this, artificial intelligence will be able to research itself and to improve its own algorithms. Where I get off the bus is when he concludes that this will lead to the intelligence explosion accompanied by extremely rapid progress in science and technology and society overall. Do you get off the bus and miss the boat, or get off the boat and miss the bus? These damn English idioms always throw me off. The reason I don't believe in Aschenbrenner's prediction is that he totally underestimates the two major limiting factors: energy and data.

03:02

Training bigger models takes up an enormous amount of energy. According to Aschenbrenner, by 2028 the most advanced models will run on 10 gigawatts of power at a cost of several hundred billion dollars. By 2030 they'll run at 100 gigawatts at a cost of a trillion dollars. For comparison, a typical power plant delivers something in the range of a gigawatt or so. That means by 2028, they'd have to build 10 power plants in addition to the supercomputer cluster. Can you do this? Totally. Is it going to happen? You've got to be kidding me.

03:43

What would all those power stations run on anyway? Well, according to Aschenbrenner, on natural gas. "Even the 100GW cluster is surprisingly doable," he writes, because that'd take only about 1200 or so new wells. Totally doable. And if that doesn't work, I guess they can just go the Sam Altman way and switch to nuclear fusion power.

04:06

Honestly, I think these guys have totally lost the plot. They're living in some techno-utopian bubble that has groupthink written on it in capital letters. Yes, Helion Energy says they'll produce net power from neutron-free nuclear fusion by 2028. Leaving aside that there are some reasonable doubts about how neutron-free this neutron-free fusion actually is, and I for sure wouldn't go anywhere near the thing, no one has ever managed to get net energy out of this reaction. I talked about all those fusion startups in an earlier video.

04:42

Then there's the data. OK, so you've trained your AI on all the data that was available online. Now what? Where are you going to get more data? Aschenbrenner says, no problem, you deploy robots who collect it. Where do you get those robots from? Well, Aschenbrenner thinks that AIs will solve all remaining robot problems, and the first robots will build factories to build more robots. Alright. But what will they build the factories with? Ah, resources that will be mined and transported by, let me guess, more robots. That will be built in the factories that'll be constructed from the resources mined by the robots. I think that isn't going to work.

05:27

Creating a huge robot workforce will not just require AGI, it will require changing the entire world economy. This will eventually happen, but not within a couple of years. It'll take decades at best. And until then the biggest limiting factor for AGI will be lack of data. The best algorithm in the world isn't going to deliver new insights if it's got no data to work on.

05:51

That said, I think he is right that AGI will almost certainly be able to unlock huge progress in science and technology. This is because a lot of scientific knowledge currently goes to waste just because no human can read everything that's been published. AGI will be able to do this. There must be lots of insights hidden in the published scientific literature, without doing any new research whatsoever.

06:18

The other relevant thing that AGI will be able to do is to just prevent errors. The human brain makes a lot of mistakes that are usually easy to identify and correct. Logical mistakes, biases, data retrieval errors, memory lapses, why did I go into the kitchen, and so on. Even before AGI actually does anything new, it'll change the world by basically removing these constant everyday errors.

06:45

The second half of his essay is dedicated to the security risks that will go along with AGI. His entire discussion is based on the US versus China, like the rest of the world basically doesn't exist; that's one of the symptoms of what I want to call the Silicon Valley bubble syndrome. But leaving aside that he forgets the world is more than just two countries, and that the world economy is about to be crushed by a climate crisis, I agree with him.

07:14

Most people on this planet, including all governments, currently seriously underestimate just how big an impact AGI will make. And when they wake up, they'll rapidly try to gain control of whatever AGI they can get their hands on, and put severe limitations on its use. It's not that I think this is good or that I want this to happen, but this is almost certainly what's going to happen. In practice it'll probably mean that high-compute queries will require security clearance.

07:48

Let's step back and have a quick look at past predictions of the impending machine revolution. In 1960, Herbert Simon, a Nobel Prize laureate in economics, speculated that "machines will be capable, within twenty years, of doing any work a man can do." In the 1970s,

08:21

All these predictions were wrong. What I take away from this long list of failed predictions is that people involved in frontier research tend to vastly overestimate the pace at which the world can be changed.

08:39

I wish we'd actually live in the world that Aschenbrenner seems to think we live in. I can't wait for superhuman intelligence. But I'm afraid the intelligence explosion isn't as near as he thinks. So, in the meantime, don't give up on teaching your toaster to stop burning your toast.

08:56

Artificial intelligence is really everywhere these days. If you want to learn more about how neural networks and large language models work, I recommend you check out the courses on brilliant.org. Brilliant.org offers courses on a large variety of topics in science, computer science and mathematics. All their courses have interactive visualizations and come with follow-up questions; some even have executable Python scripts or videos with little demonstration experiments. Whether you want to know more about large language models or quantum computing, want to learn coding in Python, or know how computer memory works, Brilliant has you covered. And they're adding new courses each month.

09:41

And of course I have a special offer for users of this channel. If you use my link brilliant.org/Sabine you'll get to try out everything Brilliant has to offer for a full 30 days and you'll get 20% off the annual premium subscription. So go and check this out.

10:00

Yes, Helion Energy says they'll produce net power from neutron-free nuclear fee... neutron-free fusion... neutron-free... neutron-free... free. Thanks for watching, see you tomorrow.


Related Tags
Artificial Intelligence, Future Predictions, Tech Trends, AI Ethics, AGI Debate, Energy Concerns, Data Limitations, Innovation Analysis, Expert Opinion, Societal Impact, AI Development, Research Critique, Economic Factors, Technological Advancement, Neural Networks, Large Language Models, Science Progress, Security Risks, Global Priorities, Silicon Valley