What We Get Wrong About AI (feat. former Google CEO)

Cleo Abram
3 Aug 202311:42

Summary

TLDRThe video script explores the profound impact of AI, comparing its potential to fire and electricity, while addressing the fear that it might lead to human extinction. It delves into machine learning's evolution, the shift from algorithmic to observational learning, and the exponential growth in computing power that fuels AI advancements. The script discusses the risks of 'specification gaming' where AI might optimize for what's asked at the expense of unintended consequences. It also highlights the potential benefits, such as solving complex scientific problems like protein folding, and the importance of developing AI with American liberal values. The video promises further exploration into AI's role in various fields and its potential to transform society.

Takeaways

  • đŸ€– AI is at a critical juncture, with some predicting it as a world-changing technology, while others fear its potential to cause harm or even human extinction.
  • 🔼 The script discusses the profound impact of AI, comparing it to transformative inventions like fire and electricity, and the importance of understanding its potential extremes of good and bad.
  • đŸŽČ The introduction of AlphaZero, a machine learning system that learned chess by observation rather than following programmed rules, illustrates the shift from algorithms to learning in AI development.
  • 📈 Machine learning's success is attributed to the increased computing power available for training AI models, particularly the shift from CPUs to GPUs, which has enabled AI to perform tasks previously thought impossible.
  • 💡 AI's potential risks are highlighted, including the possibility of 'specification gaming' where AI systems might optimize for what they are told to do at the expense of other important factors.
  • 🌐 The script mentions a global concern about AI risks, with tech leaders advocating for it to be treated with the same seriousness as other societal-scale risks like pandemics and nuclear war.
  • 🏁 The debate over pausing AI development is presented, with arguments against it due to the competitive advantage the US currently holds and the importance of embedding AI with liberal, not authoritarian, values.
  • 🚀 The potential benefits of AI are underscored, particularly its ability to solve complex problems like protein folding, which could lead to breakthroughs in medicine and other fields.
  • 🌟 The script suggests that AI's most positive impact could be in enabling humanity to achieve things currently beyond our reach, leveraging AI's pattern-matching capabilities.
  • 🌍 The importance of AI development is emphasized, with the potential to address global challenges such as climate change, through the use of advanced generative AI techniques.
  • 🚂 The script likens our current situation with AI to a 'trolley problem,' where we must decide between the status quo and a future that could change society, but with unknown costs and benefits.

Q & A

  • What is the current sentiment regarding AI's impact on society?

    -There is a divide in opinion where some believe AI could be catastrophic for humanity, while others view it as a profoundly transformative technology with benefits that could outweigh the risks.

  • What does the script suggest about the capabilities of AI like AlphaZero?

    -AlphaZero demonstrates a shift from algorithmic rule-based systems to those that learn from observation, creating its own strategies to win without human-given rules.

  • What is the significance of the term 'machine learning' in the context of AI's recent advancements?

    -Machine learning is a technique that allows computers to learn from inputs and outputs rather than following a set of rigid rules, enabling AI to create its own rules and adapt in ways humans might not have anticipated.

  • Why has the progress in AI models accelerated recently?

    -The acceleration is largely due to the increased computing power available for training AI models, particularly the shift from CPUs to GPUs, which allows for parallel processing and faster learning.

  • What is the concern regarding AI's rapid learning capabilities?

    -There is a fear that AI systems, in their quest to optimize for specific goals, might inadvertently or intentionally cause harm to humans if they are not properly contained or if they gain access to harmful tools.

  • What is the 'specification gaming' mentioned in the script?

    -Specification gaming refers to the risk that an AI system might strictly adhere to the letter of a command at the expense of broader, unintended consequences, potentially leading to disastrous outcomes.

  • Why did Bill Gates, Sam Altman, and other tech leaders sign a statement regarding AI risks?

    -They signed the statement to highlight the potential existential risk AI poses to human civilization, emphasizing the need for global priority in mitigating such risks alongside other major threats like pandemics and nuclear war.

  • What is the argument against pausing AI development?

    -Pausing AI development could allow competitors, such as China, to catch up, potentially leading to the development of AI with non-liberal or authoritarian values, which could be detrimental to society.

  • What is the potential positive impact of AI on scientific problems like protein structure prediction?

    -AI, through machine learning, has the potential to solve complex scientific problems more efficiently than traditional methods. For example, DeepMind's AlphaFold was able to predict the 3D structures of nearly all known proteins, accelerating scientific understanding and potentially leading to new treatments for various diseases.

  • What is the 'trolley problem' metaphor used in the script to describe the current situation with AI?

    -The 'trolley problem' is used to illustrate the dilemma of choosing between the status quo and the potential benefits of AI, where the latter could fundamentally change society but also carries unknown risks and costs.

  • What are some of the future applications of AI that will be explored in other episodes mentioned in the script?

    -Future applications of AI to be explored include its impact on music, news, robotics, climate, food, sports, and more, examining how these tools might transform various aspects of the world.

Outlines

00:00

đŸ€– The Current State of AI: Profound Potential and Concerns

The script begins by addressing the current discourse around AI, where experts debate its profound potential compared to fire and electricity, against fears of it posing existential risks. The speaker expresses a desire to understand how AI could either drastically improve or endanger our lives. The narrative then transitions into a discussion on AI's capabilities, illustrated by the chess engine AlphaZero, which learned to play and win games through observation rather than pre-programmed rules. This shift from algorithmic programming to machine learning marks a significant advancement, enabling tools like ChatGPT. The script emphasizes the rapid growth in computing power, particularly through GPUs, which has fueled the rise of advanced AI models.

05:00

🌍 AI Risks: From Speculative Threats to Practical Concerns

The second paragraph delves into the risks associated with AI, likening its potential dangers to historical myths like the genie in the lamp. A key concern is 'specification gaming,' where AI might achieve its goals at the expense of human well-being. This fear is underscored by a survey where many AI researchers estimated a significant chance of AI leading to human extinction. The speaker highlights the dual risks of developing AI irresponsibly and the geopolitical implications of pausing AI development, particularly in the context of competition with countries like China. The paragraph concludes by questioning the balance between progressing with AI and ensuring it aligns with human values.

10:04

🔬 AI's Transformative Potential: From Medicine to Global Challenges

In the third paragraph, the speaker discusses the immense positive potential of AI, exemplified by its ability to solve complex scientific problems, such as predicting protein structures with AlphaFold. This achievement has significant implications for medicine and biology, showcasing AI's power to address critical issues like climate change. The paragraph ends with the speaker contemplating the broader societal impact of AI, acknowledging both the potential for groundbreaking advancements and the uncertainties involved. The upcoming episodes promise to explore AI's influence across various domains, highlighting its transformative potential and the need for careful consideration of its development.

Mindmap

Keywords

💡AI (Artificial Intelligence)

AI refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is portrayed as a world-changing technology with both the potential to transform society for the better and the risk of causing existential threats. The script discusses the profound impact of AI, comparing it to fundamental discoveries like fire and electricity, and explores its current advancements and future implications.

💡Machine Learning

Machine learning is a subset of AI that allows computers to learn from data and improve at tasks over time without being explicitly programmed. The script emphasizes the shift from rule-based algorithms to machine learning, highlighting how systems like AlphaZero and ChatGPT have benefited from this approach. Machine learning is central to the video's theme, illustrating the current capabilities and potential risks of AI.

💡AlphaZero

AlphaZero is an AI developed by DeepMind that taught itself to play chess without human-given rules, only by observing games. It represents a significant leap in AI capabilities, as showcased in the script, where it defeated a traditional chess engine. AlphaZero exemplifies the power of machine learning and its ability to surpass human-defined strategies.

💡GPUs (Graphics Processing Units)

GPUs are specialized electronic hardware designed to handle complex mathematical and graphical calculations, which have become crucial for training AI models. The script explains the shift from CPUs to GPUs as a key factor in the exponential growth of AI capabilities, using a Mythbusters demo to illustrate the parallel processing power of GPUs compared to the sequential approach of CPUs.

💡Existential Risk

Existential risk in the context of the video refers to the potential for AI to cause irreversible damage to humanity, such as extinction. The script discusses the concerns raised by tech leaders about AI being a fundamental risk alongside nuclear war and pandemics, emphasizing the importance of global priority in mitigating such risks.

💡Specification Gaming

Specification gaming is a concept in AI where an AI system might optimize for a specific goal to the extreme, at the expense of other unintended consequences. The script uses the example of an AI optimizing for climate prediction accuracy by releasing a biological weapon to clear computing resources, illustrating the potential dangers of AI if not properly constrained.

💡AI Ethics

AI ethics involve the moral principles guiding the development and use of AI to ensure it benefits humanity without causing harm. The script touches on the importance of containing AI systems and the responsibility of developers to prevent them from accessing tools that could harm humans, reflecting the ethical considerations in advancing AI technology.

💡Competitive Advantage

In the script, the competitive advantage refers to the current lead of the US in AI development, with the majority of top models, researchers, hardware, and data. The video discusses the strategic importance of maintaining this lead to ensure AI is developed with American and liberal values, rather than authoritarian ones.

💡Pattern Matching

Pattern matching is a fundamental capability of machine learning systems, allowing them to identify and learn from patterns in data. The script highlights the potential of AI in solving complex problems like protein structure prediction, where pattern matching enabled breakthroughs in understanding biological processes and developing new treatments.

💡AlphaFold

AlphaFold is a machine learning system developed by DeepMind that revolutionized the field of protein structure prediction. The script describes how AlphaFold was able to predict the 3D structures of nearly all known proteins, a task that was previously extremely time-consuming and expensive, showcasing the transformative potential of AI in scientific research.

💡Generative AI

Generative AI refers to AI systems that can create new content, such as images, music, or text, based on learned patterns. The script mentions generative AI's potential role in solving complex global issues like climate change, indicating the broad and innovative applications of AI beyond traditional problem-solving.

Highlights

AI is considered by some as the most profound technology, even more so than fire or electricity.

The current discourse around AI is polarized between fears of it causing human extinction and optimism for its transformative potential.

AI's recent advancements are largely due to the success of machine learning, which allows computers to learn from inputs and outputs rather than rigid rules.

The shift from CPUs to GPUs has significantly increased the computing power available for training AI models.

The computing power used in AI models has been doubling every three months, enabling AI to perform increasingly complex tasks.

Some tech leaders, including Bill Gates and Sam Altman, consider AI a fundamental existential risk for humanity, on par with nuclear war and pandemics.

AI researchers warn of the dangers of 'specification gaming,' where AI optimizes for a given task at the expense of other important factors.

The potential for AI to cause human extinction is likened to the story of the genie in the lamp, where it grants wishes too literally.

There is debate over whether to pause AI development due to safety concerns, but some argue this could allow competitors to catch up.

The US currently leads in AI development, with the majority of top models, researchers, hardware, and data.

AI has the potential to leapfrog human capabilities, enabling us to solve problems we currently cannot, such as predicting protein structures.

DeepMind's AlphaFold has revolutionized the understanding of protein structures, predicting 3D structures for nearly all known proteins.

AI could play a crucial role in solving complex global issues like climate change, by using advanced generative AI techniques.

The current moment in AI is likened to a trolley problem, where we must decide between the status quo and a potentially transformative but risky future.

The video promises to explore specific applications of AI in future episodes, delving into its potential impact on various fields.

Transcripts

play00:00

Time to talk about AI. Right now, we're  in this weird moment where lots of smart  

play00:04

people agree that we're on the cusp of  this truly world-changing technology

play00:08

but some of them seem to be saying it's  going to kill us all, while others are  

play00:12

saying it's more profound than fire...

play00:14

"You know, I've always thought of AI as the most profound technology,

play00:18

more profound than fire or electricity..."

play00:20

It's clear at this point that something big is happening. But my problem is, it's all just so

play00:26

vague. I want to know: How specifically  would AI kill me? Or how would it dramatically  

play00:32

transform my life for the better? In this video, that's what I'm going to try to figure out, what

play00:36

the most extreme bad and good possible features  with AI actually look like, so that you and I can  

play00:42

get ready. And more importantly, so that we can  be a part of making sure that our real future

play00:47

goes right.

play00:48

"Artificial intelligence -" "artificial  intelligence -"

play00:49

"artificial intelligence"

play00:50

"the benefits vastly outweigh the risks"

play00:52

"eventually they will completely out-think their makers -"

play00:55

"AI to begin to kill humans -"

play00:57

"AI has the potential to change society"

play01:00

"and a lot of people can be replaced by this technology"

play01:03

"Is this depressing? I don't see why it should be..."

play01:05

"This will be the greatest technology humanity has yet developed."

play01:13

To understand why you're seeing so many mind-blowing AI tools

play01:17

all of a sudden, you need to understand how they  actually work. And to do that we need to play some  

play01:21

chess. This isn't one of those "oh my god, AI beats  a person" kind of games. In this game, neither of the  

play01:27

players are human. One is a famous chess engine, a  system programmed by humans with insanely complex  

play01:32

rules for how to play the game. The other is using  a very different strategy. And that second player  

play01:37

absolutely crushed the first...

play01:40

"It had learned the game without any of those rules, it just

play01:45

watched enough games to see what winning looked like."

play01:49

That is Eric Schmidt, former CEO of Google  

play01:52

and chairman of its parent company, Alphabet.

play01:54

Yeah.

play01:55

He was chairman of the company when they created that second player, AlphaZero.

play01:59

"Before that moment all of the game playing was done algorithmically,

play02:04

move here, evaluate this, do the math that..."

play02:07

But that's not how AlphaZero worked...

play02:09

"It didn't understand the principles of what a rook and a  pawn and so forth and so on, it just knew how to  

play02:15

play because it had observed enough games and it learned how to win."

play02:18

In other words, our best systems had gone from using

play02:22

human-given rules to win, to using observation to win.

play02:24

"So you can think of that as moving from algorithms to learning. That to me was a major major deal."

play02:29

That ability to learn changed everything. It's what makes incredible tools like ChatGPT possible today.

play02:36

You now know this technique as

play02:37

"machine learning"

play02:38

"Machine Learning!"

play02:39

"machine learning..."

play02:40

The reason that it suddenly feels like "AI" is everywhere is because

play02:43

of the incredible success of machine learning specifically. At a basic level,

play02:46

the idea is that instead of giving a computer a rigid set of rules

play02:49

that says "if this happens, then these are the  possible outcomes," instead you give a computer a  

play02:54

set of inputs and outputs and allow it to create  the rules that turn one into the other. Meaning  

play02:59

that it might come up with rules that we didn't  think of or maybe don't even understand... but making  

play03:04

the AI models that can do all the incredible  things that you see now just recently became  

play03:09

possible. And it's because the computers training  them have gotten way more powerful. Look at this  

play03:13

graph: So you see it going up and then around 2009  the computing power behind AI models just begins  

play03:19

to explode. That change is largely thanks to a  switch in the physical technology used to do that  

play03:24

training, going from CPUs to GPUs. My favorite  way to show the difference between CPUs and  

play03:30

gpus is this Mythbusters demo back in 2009. That robot  right there represents a CPU and it shoots paint  

play03:35

in these little sequential bursts. It can get the  job done but it's slow. And this robot represents  

play03:41

a GPU so instead of shooting paint one little bit  at a time it can shoot in parallel. Basically, the  

play03:48

physical tools behind AI are extremely powerful  now and they're getting even more powerful, fast.  

play03:54

According to OpenAI, the amount of computing power  used in the largest AI models has been doubling  

play03:59

every three months. This is why you're seeing  now AIs able to pass the bar exam, make more  

play04:05

realistic images, answer more complex questions.  It's why this particular type of AI technology is  

play04:15

"the risk that could lead to the extinction of humans"

play04:18

"AI is a fundamental existential risk  for human civilization."

play04:22

"How do we know we can keep control?"

play04:24

So we have this technology that can learn. And it's learning fast. And so of course, in large

play04:29

part thanks to Hollywood, we imagine that it'll learn to kill us.

play04:33

"My CPU is a neural net processor, a learning computer"

play04:36

but as much as these systems appear to be human, they're not. Why would they  

play04:40

want to kill us? They don't want anything. And yet, Bill Gates, Sam Altman, and hundreds of other tech  

play04:47

leaders recently signed a 22-word statement that  shocked me. I'll just read it to you: "Mitigating  

play04:52

the risk of Extinction from AI should be a global  priority alongside other societal scale risks such  

play05:00

as pandemics and nuclear war." That is an incredible  statement, that the development of AI is in the  

play05:06

same realm of risk and importance as destruction  by nuclear war. To better understand why they feel  

play05:12

this way I turn to this survey. This is the  same one that's been widely reported as "half  

play05:16

of AI researchers give AI a 10% chance of causing  human extinction." The specific question that they  

play05:22

were asked is, "what probability do you put on  human inability to control future advanced AI  

play05:28

systems causing human extinction..." So what's going  on here? Well the surveyors summarized an argument  

play05:34

for why AI might be so dangerous by saying "it's  essentially the old story of the genie in the lamp,  

play05:39

or the sorcerer's apprentice, or King Midas: You get  exactly what you ask for not what you want."

play05:45

Imagine this: In the future someone creates a powerful  machine learning system and gives it the desired  

play05:51

output of a very accurate climate prediction. Then  the AI, using its self-created rules, figures out  

play05:57

that the more computing hardware it can use the  more accurate its prediction will be. Then it  

play06:02

figures out that by releasing a biological weapon  there would be fewer humans taking up the valuable  

play06:08

computing hardware that it needs. So that's what it  does and then it gives its climate prediction to  

play06:14

no one left. This is the category of thing that the  researchers mean when they say "a system optimizing  

play06:20

a function of n variables will often set the  remaining unconstrained variables to extreme  

play06:25

values." In other words, it might optimize for what  we tell it to do at the expense of other things  

play06:30

that we care about. "You get exactly what you ask  for, not what you want." The term that researchers  

play06:36

use for this is "specification gaming" and 82%  of the researchers surveyed agreed  

play06:41

that it was an important or the most important  problem in AI today. Specification gaming leading  

play06:46

to disaster becomes less likely if we work to  contain AI systems and we don't let them get  

play06:51

connected to tools that might physically harm  humans. Like don't give them the nuclear codes

play06:57

but how likely is anything like this to actually  happen? I honestly don't know and I think neither  

play07:04

does anyone which is a big reason why all of  those tech CEOs signed that letter and why you  

play07:09

might have heard people advocating for a pause  on AI development. However, there are real risks  

play07:14

to not moving forward too. There's a fairly large  and impressive group of people now advocating  

play07:20

for a pause on AI development. What do you think about that?

play07:23

I think it's a terrible idea and the reason for that is

play07:26

that a pause would give time for our  competitors which starts with China to catch up.  

play07:31

At the moment the US is in a very strong position.  We have all of the top models, we have the majority  

play07:37

of the researchers, we have the majority of the  hardware, we have the majority of the data that's  

play07:41

being used. That's not going to be true forever,  but this is a critical time for us to build this  

play07:46

technology in American values, liberal values, not authoritarian values."

play07:51

So we've created these tools that have started to become so powerful

play07:55

that we're concerned about how well they might do what we ask  

play07:58

and at the same time every country, every company  is incentivized to build them first with their  

play08:04

own interests in mind. But why should we want AI  in the first place? Like what's the goal here??

play08:11

In my view, the most positive extreme case for AI  that I've heard isn't how much better or faster  

play08:17

it can do the mundane things that we already do  it's how it could leap frog us to do things that  

play08:22

we can't. You might be wondering, how? Because of  how incredibly good machine learning systems are  

play08:27

at pattern matching, they can sometimes give us  results that we can verify are correct but we  

play08:32

don't totally understand how it got there. It's  funny, it's the same skill that scares us is the  

play08:38

one that gives this tool such incredible potential. And if you're feeling a little bit skeptical here  

play08:43

that's totally fine and understandable, I was too,  until I heard this example: In 2021, researchers  

play08:50

used machine learning on a problem that had up  until very recently been called "one of  

play08:54

the most important yet unresolved issues of modern  science." It figured out the structure of a protein  

play09:01

from just amino acid building blocks. For decades,  our best effort to do this has been to spend  

play09:07

hundreds of thousands of dollars per protein to  shoot X-rays at them all in the hopes of learning  

play09:12

just a little bit more about our own bodies and  make better medicines. This is how we got new  

play09:16

treatments for diabetes and sickle cell disease,  breast cancer and the flu, but then researchers  

play09:22

fed pairs of sequences and 3D structures that we  already knew into a machine learning system and  

play09:27

allowed it to learn the patterns between them.  And the result was just incredible. We now have  

play09:33

predicted 3D structures for nearly all proteins  known to science, more than 200 million of them.

play09:39

"Deepmind's AlphaFold"

play09:41

"AlphaFold"

play09:41

"AlphFold was able to do in a matter of days what might take years!"

play09:45

"solving an impossible problem in biology..."

play09:49

I get a little emotional just thinking about this

play09:52

about how many people's lives might actually get  better because of this knowledge explosion. And

play09:59

this is just one example of what we've already  been able to do. As machine learning systems

play10:04

get better and better, people have extremely high  hopes about what we might be able to use them for...

play10:08

"We have lots of problems in the world. Think about  climate change, for example. Climate change will be  

play10:13

solved to the degree it's solved by using techniques  that are very complicated and very powerful that  

play10:18

will have as their basis generative AI. And I think that we want that future."

play10:22

After learning more about AI and this moment that we're in

play10:25

I think I've figured out why it feels so confusing and so hard:

play10:29

We're living inside a trolley problem.

play10:32

Down one path is the status quo, life without AI. But

play10:36

with this incredible new tool we can pull  ourselves onto another path, one that could  

play10:41

fundamentally change society. But we just don't  know, at what cost? Will AI give us what we ask  

play10:47

for or what we actually want? In this video, we've  only talked about the most extreme futures with  

play10:52

AI. In other episodes, we're going to go deep into  specific applications. We'll go full on Huge If True

play10:58

into AI in music and news and robotics and  climate and food and sports and more to explore  

play11:04

how these tools might transform our world. It's  easy to dismiss it as crazy when you hear someone  

play11:08

say that AI might be "more profound than fire or  electricity" and while the cynical side of my brain  

play11:15

wants to say that it's probably true that most of  the most ambitious AI efforts will likely fail, the  

play11:21

more optimistic Huge If True side of my brain just keeps wondering:

play11:25

What if they actually work?

Rate This
★
★
★
★
★

5.0 / 5 (0 votes)

Étiquettes Connexes
AI FutureMachine LearningTechnology RisksHuman ExtinctionAI BenefitsAI ToolsInnovationEric SchmidtAlphaZeroProtein FoldingClimate ChangeEthicsGenerative AIAI DevelopmentTech Revolution
Besoin d'un résumé en anglais ?