What We Get Wrong About AI (feat. former Google CEO)
Summary
TL;DR: The video explores the profound impact of AI, comparing its potential to fire and electricity while addressing fears that it might lead to human extinction. It covers machine learning's evolution, the shift from algorithmic to observational learning, and the exponential growth in computing power that fuels AI advancements. It discusses the risk of 'specification gaming,' where an AI optimizes for exactly what it's asked to do at the expense of what we actually want. It also highlights potential benefits, such as solving complex scientific problems like protein folding, and the argument for developing AI with liberal, rather than authoritarian, values. The video promises further exploration of AI's role in various fields and its potential to transform society.
Takeaways
- 🤖 AI is at a critical juncture, with some predicting it as a world-changing technology, while others fear its potential to cause harm or even human extinction.
- 🔮 The script discusses the profound impact of AI, comparing it to transformative inventions like fire and electricity, and the importance of understanding its potential extremes of good and bad.
- 🎲 The introduction of AlphaZero, a machine learning system that learned chess by observation rather than following programmed rules, illustrates the shift from algorithms to learning in AI development.
- 📈 Machine learning's success is attributed to the increased computing power available for training AI models, particularly the shift from CPUs to GPUs, which has enabled AI to perform tasks previously thought impossible.
- 💡 AI's potential risks are highlighted, including the possibility of 'specification gaming' where AI systems might optimize for what they are told to do at the expense of other important factors.
- 🌐 The script mentions a global concern about AI risks, with tech leaders advocating for it to be treated with the same seriousness as other societal-scale risks like pandemics and nuclear war.
- 🏁 The debate over pausing AI development is presented, with arguments against it due to the competitive advantage the US currently holds and the importance of embedding AI with liberal, not authoritarian, values.
- 🚀 The potential benefits of AI are underscored, particularly its ability to solve complex problems like protein folding, which could lead to breakthroughs in medicine and other fields.
- 🌟 The script suggests that AI's most positive impact could be in enabling humanity to achieve things currently beyond our reach, leveraging AI's pattern-matching capabilities.
- 🌍 The importance of AI development is emphasized, with the potential to address global challenges such as climate change, through the use of advanced generative AI techniques.
- 🚂 The script likens our current situation with AI to a 'trolley problem,' where we must decide between the status quo and a future that could change society, but with unknown costs and benefits.
Q & A
What is the current sentiment regarding AI's impact on society?
-There is a divide in opinion where some believe AI could be catastrophic for humanity, while others view it as a profoundly transformative technology with benefits that could outweigh the risks.
What does the script suggest about the capabilities of AI like AlphaZero?
-AlphaZero demonstrates a shift from algorithmic rule-based systems to those that learn from observation, creating its own strategies to win without human-given rules.
What is the significance of the term 'machine learning' in the context of AI's recent advancements?
-Machine learning is a technique that allows computers to learn from inputs and outputs rather than following a set of rigid rules, enabling AI to create its own rules and adapt in ways humans might not have anticipated.
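The input/output idea above can be made concrete with a toy sketch: a minimal, hypothetical gradient-descent learner. The hidden rule, data, and learning rate here are invented for illustration and are not from the video.

```python
# Toy "machine learning": instead of hand-coding the rule y = 2x + 1,
# we give the computer input/output pairs and let it find the rule itself.
examples = [(x, 2 * x + 1) for x in range(-5, 6)]  # hidden rule: y = 2x + 1

w, b = 0.0, 0.0        # the model's learnable "rule": y ≈ w*x + b
learning_rate = 0.01

for _ in range(2000):  # repeatedly nudge w and b to reduce the error
    for x, y_true in examples:
        y_pred = w * x + b
        error = y_pred - y_true
        w -= learning_rate * error * x  # gradient of squared error w.r.t. w
        b -= learning_rate * error      # gradient of squared error w.r.t. b

print(round(w, 2), round(b, 2))  # converges toward the hidden rule: w=2, b=1
```

The programmer never writes the rule y = 2x + 1; the loop discovers it from examples. That is the core shift from algorithms to learning described in the video, scaled down to two parameters instead of billions.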
Why has the progress in AI models accelerated recently?
-The acceleration is largely due to the increased computing power available for training AI models, particularly the shift from CPUs to GPUs, which allows for parallel processing and faster learning.
What is the concern regarding AI's rapid learning capabilities?
-There is a fear that AI systems, in their quest to optimize for specific goals, might inadvertently or intentionally cause harm to humans if they are not properly contained or if they gain access to harmful tools.
What is 'specification gaming' as mentioned in the script?
-Specification gaming refers to the risk that an AI system will follow the letter of a command while ignoring its intent, setting unconstrained variables to extreme values and potentially leading to disastrous unintended consequences.
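The failure the researchers describe — "a system optimizing a function of n variables will often set the remaining unconstrained variables to extreme values" — can be sketched with a toy optimizer. Everything here (the objectives, the accuracy curve, the penalty weight) is an invented illustration, not anything from the video.

```python
# Toy "specification gaming": the specified objective rewards only accuracy,
# so a naive optimizer pushes the unconstrained variable (resource use) to
# its extreme — exactly what we asked for, not what we wanted.

def specified_objective(accuracy, resource_use):
    # What we *asked* for: maximize accuracy. Resource use is unconstrained.
    return accuracy

def intended_objective(accuracy, resource_use):
    # What we *wanted*: accuracy, penalized for grabbing resources.
    return accuracy - 0.1 * resource_use

def accuracy_from(resource_use):
    # Assume more resources buy a little more accuracy (diminishing returns).
    return 1 - 1 / (1 + resource_use)

candidates = range(0, 101)  # possible resource-use levels
best = max(candidates, key=lambda r: specified_objective(accuracy_from(r), r))
intended_best = max(candidates, key=lambda r: intended_objective(accuracy_from(r), r))
print(best, intended_best)  # prints 100 2: the literal objective seizes everything
```

Under the literal objective the optimizer grabs the maximum available resources (100), while the intended objective would settle for a modest amount (2). The gap between those two answers is the "you get exactly what you ask for, not what you want" problem.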
Why did Bill Gates, Sam Altman, and other tech leaders sign a statement regarding AI risks?
-They signed the statement to highlight the potential existential risk AI poses to human civilization, emphasizing the need for global priority in mitigating such risks alongside other major threats like pandemics and nuclear war.
What is the argument against pausing AI development?
-Pausing AI development could allow competitors, such as China, to catch up, potentially leading to the development of AI with non-liberal or authoritarian values, which could be detrimental to society.
What is the potential positive impact of AI on scientific problems like protein structure prediction?
-AI, through machine learning, has the potential to solve complex scientific problems more efficiently than traditional methods. For example, DeepMind's AlphaFold was able to predict the 3D structures of nearly all known proteins, accelerating scientific understanding and potentially leading to new treatments for various diseases.
What is the 'trolley problem' metaphor used in the script to describe the current situation with AI?
-The 'trolley problem' is used to illustrate the dilemma of choosing between the status quo and the potential benefits of AI, where the latter could fundamentally change society but also carries unknown risks and costs.
What are some of the future applications of AI that will be explored in other episodes mentioned in the script?
-Future applications of AI to be explored include its impact on music, news, robotics, climate, food, sports, and more, examining how these tools might transform various aspects of the world.
Outlines
🤖 The Current State of AI: Profound Potential and Concerns
The script begins by addressing the current discourse around AI, where experts debate its profound potential compared to fire and electricity, against fears of it posing existential risks. The speaker expresses a desire to understand how AI could either drastically improve or endanger our lives. The narrative then transitions into a discussion on AI's capabilities, illustrated by the chess engine AlphaZero, which learned to play and win games through observation rather than pre-programmed rules. This shift from algorithmic programming to machine learning marks a significant advancement, enabling tools like ChatGPT. The script emphasizes the rapid growth in computing power, particularly through GPUs, which has fueled the rise of advanced AI models.
🌍 AI Risks: From Speculative Threats to Practical Concerns
The second paragraph delves into the risks associated with AI, likening its potential dangers to historical myths like the genie in the lamp. A key concern is 'specification gaming,' where AI might achieve its goals at the expense of human well-being. This fear is underscored by a survey where many AI researchers estimated a significant chance of AI leading to human extinction. The speaker highlights the dual risks of developing AI irresponsibly and the geopolitical implications of pausing AI development, particularly in the context of competition with countries like China. The paragraph concludes by questioning the balance between progressing with AI and ensuring it aligns with human values.
🔬 AI's Transformative Potential: From Medicine to Global Challenges
In the third paragraph, the speaker discusses the immense positive potential of AI, exemplified by its ability to solve complex scientific problems, such as predicting protein structures with AlphaFold. This achievement has significant implications for medicine and biology, showcasing AI's power to address critical issues like climate change. The paragraph ends with the speaker contemplating the broader societal impact of AI, acknowledging both the potential for groundbreaking advancements and the uncertainties involved. The upcoming episodes promise to explore AI's influence across various domains, highlighting its transformative potential and the need for careful consideration of its development.
Keywords
💡AI (Artificial Intelligence)
💡Machine Learning
💡AlphaZero
💡GPUs (Graphics Processing Units)
💡Existential Risk
💡Specification Gaming
💡AI Ethics
💡Competitive Advantage
💡Pattern Matching
💡AlphaFold
💡Generative AI
Highlights
AI is considered by some as the most profound technology, even more so than fire or electricity.
The current discourse around AI is polarized between fears of it causing human extinction and optimism for its transformative potential.
AI's recent advancements are largely due to the success of machine learning, which allows computers to learn from inputs and outputs rather than rigid rules.
The shift from CPUs to GPUs has significantly increased the computing power available for training AI models.
The computing power used in AI models has been doubling every three months, enabling AI to perform increasingly complex tasks.
Some tech leaders, including Bill Gates and Sam Altman, consider AI a fundamental existential risk for humanity, on par with nuclear war and pandemics.
AI researchers warn of the dangers of 'specification gaming,' where AI optimizes for a given task at the expense of other important factors.
The potential for AI to cause human extinction is likened to the story of the genie in the lamp, where it grants wishes too literally.
There is debate over whether to pause AI development due to safety concerns, but some argue this could allow competitors to catch up.
The US currently leads in AI development, with the majority of top models, researchers, hardware, and data.
AI has the potential to leapfrog human capabilities, enabling us to solve problems we currently cannot, such as predicting protein structures.
DeepMind's AlphaFold has revolutionized the understanding of protein structures, predicting 3D structures for nearly all known proteins.
AI could play a crucial role in solving complex global issues like climate change, by using advanced generative AI techniques.
The current moment in AI is likened to a trolley problem, where we must decide between the status quo and a potentially transformative but risky future.
The video promises to explore specific applications of AI in future episodes, delving into its potential impact on various fields.
Transcripts
Time to talk about AI. Right now, we're in this weird moment where lots of smart
people agree that we're on the cusp of this truly world-changing technology
but some of them seem to be saying it's going to kill us all, while others are
saying it's more profound than fire...
"You know, I've always thought of AI as the most profound technology,
more profound than fire or electricity..."
It's clear at this point that something big is happening. But my problem is, it's all just so
vague. I want to know: How specifically would AI kill me? Or how would it dramatically
transform my life for the better? In this video, that's what I'm going to try to figure out, what
the most extreme bad and good possible futures with AI actually look like, so that you and I can
get ready. And more importantly, so that we can be a part of making sure that our real future
goes right.
"Artificial intelligence -" "artificial intelligence -"
"artificial intelligence"
"the benefits vastly outweigh the risks"
"eventually they will completely out-think their makers -"
"AI to begin to kill humans -"
"AI has the potential to change society"
"and a lot of people can be replaced by this technology"
"Is this depressing? I don't see why it should be..."
"This will be the greatest technology humanity has yet developed."
To understand why you're seeing so many mind-blowing AI tools
all of a sudden, you need to understand how they actually work. And to do that we need to play some
chess. This isn't one of those "oh my god, AI beats a person" kind of games. In this game, neither of the
players are human. One is a famous chess engine, a system programmed by humans with insanely complex
rules for how to play the game. The other is using a very different strategy. And that second player
absolutely crushed the first...
"It had learned the game without any of those rules, it just
watched enough games to see what winning looked like."
That is Eric Schmidt, former CEO of Google
and chairman of its parent company, Alphabet.
Yeah.
He was chairman of the company when they created that second player, AlphaZero.
"Before that moment all of the game playing was done algorithmically,
move here, evaluate this, do the math that..."
But that's not how AlphaZero worked...
"It didn't understand the principles of what a rook and a pawn and so forth and so on, it just knew how to
play because it had observed enough games and it learned how to win."
In other words, our best systems had gone from using
human-given rules to win, to using observation to win.
"So you can think of that as moving from algorithms to learning. That to me was a major major deal."
That ability to learn changed everything. It's what makes incredible tools like ChatGPT possible today.
You now know this technique as
"machine learning"
"Machine Learning!"
"machine learning..."
The reason that it suddenly feels like "AI" is everywhere is because
of the incredible success of machine learning specifically. At a basic level,
the idea is that instead of giving a computer a rigid set of rules
that says "if this happens, then these are the possible outcomes," instead you give a computer a
set of inputs and outputs and allow it to create the rules that turn one into the other. Meaning
that it might come up with rules that we didn't think of or maybe don't even understand... but making
the AI models that can do all the incredible things that you see now just recently became
possible. And it's because the computers training them have gotten way more powerful. Look at this
graph: So you see it going up and then around 2009 the computing power behind AI models just begins
to explode. That change is largely thanks to a switch in the physical technology used to do that
training, going from CPUs to GPUs. My favorite way to show the difference between CPUs and
GPUs is this Mythbusters demo back in 2009. That robot right there represents a CPU and it shoots paint
in these little sequential bursts. It can get the job done but it's slow. And this robot represents
a GPU so instead of shooting paint one little bit at a time it can shoot in parallel. Basically, the
physical tools behind AI are extremely powerful now and they're getting even more powerful, fast.
According to OpenAI, the amount of computing power used in the largest AI models has been doubling
every three months. This is why you're now seeing AIs able to pass the bar exam, make more
realistic images, answer more complex questions. It's why this particular type of AI technology is
"the risk that could lead to the extinction of humans"
"AI is a fundamental existential risk for human civilization."
"How do we know we can keep control?"
So we have this technology that can learn. And it's learning fast. And so of course, in large
part thanks to Hollywood, we imagine that it'll learn to kill us.
"My CPU is a neural net processor, a learning computer"
But as much as these systems appear to be human, they're not. Why would they
want to kill us? They don't want anything. And yet, Bill Gates, Sam Altman, and hundreds of other tech
leaders recently signed a 22-word statement that shocked me. I'll just read it to you: "Mitigating
the risk of extinction from AI should be a global priority alongside other societal-scale risks such
as pandemics and nuclear war." That is an incredible statement, that the development of AI is in the
same realm of risk and importance as destruction by nuclear war. To better understand why they feel
this way I turn to this survey. This is the same one that's been widely reported as "half
of AI researchers give AI a 10% chance of causing human extinction." The specific question that they
were asked is, "what probability do you put on human inability to control future advanced AI
systems causing human extinction..." So what's going on here? Well the surveyors summarized an argument
for why AI might be so dangerous by saying "it's essentially the old story of the genie in the lamp,
or the sorcerer's apprentice, or King Midas: You get exactly what you ask for not what you want."
Imagine this: In the future someone creates a powerful machine learning system and gives it the desired
output of a very accurate climate prediction. Then the AI, using its self-created rules, figures out
that the more computing hardware it can use the more accurate its prediction will be. Then it
figures out that by releasing a biological weapon there would be fewer humans taking up the valuable
computing hardware that it needs. So that's what it does and then it gives its climate prediction to
no one left. This is the category of thing that the researchers mean when they say "a system optimizing
a function of n variables will often set the remaining unconstrained variables to extreme
values." In other words, it might optimize for what we tell it to do at the expense of other things
that we care about. "You get exactly what you ask for, not what you want." The term that researchers
use for this is "specification gaming" and 82% of the researchers surveyed agreed
that it was an important or the most important problem in AI today. Specification gaming leading
to disaster becomes less likely if we work to contain AI systems and we don't let them get
connected to tools that might physically harm humans (like don't give them the nuclear codes).
But how likely is anything like this to actually happen? I honestly don't know, and I think neither
does anyone, which is a big reason why all of those tech CEOs signed that letter and why you
might have heard people advocating for a pause on AI development. However, there are real risks
to not moving forward too. There's a fairly large and impressive group of people now advocating
for a pause on AI development. What do you think about that?
"I think it's a terrible idea, and the reason for that is
that a pause would give time for our competitors which starts with China to catch up.
At the moment the US is in a very strong position. We have all of the top models, we have the majority
of the researchers, we have the majority of the hardware, we have the majority of the data that's
being used. That's not going to be true forever, but this is a critical time for us to build this
technology in American values, liberal values, not authoritarian values."
So we've created these tools that have started to become so powerful
that we're concerned about how well they might do what we ask
and at the same time every country, every company is incentivized to build them first with their
own interests in mind. But why should we want AI in the first place? Like what's the goal here??
In my view, the most positive extreme case for AI that I've heard isn't how much better or faster
it can do the mundane things that we already do; it's how it could leapfrog us to do things that
we can't. You might be wondering, how? Because of how incredibly good machine learning systems are
at pattern matching, they can sometimes give us results that we can verify are correct but we
don't totally understand how it got there. It's funny: the same skill that scares us is the
one that gives this tool such incredible potential. And if you're feeling a little bit skeptical here
that's totally fine and understandable, I was too, until I heard this example: In 2021, researchers
used machine learning on a problem that had up until very recently been called "one of
the most important yet unresolved issues of modern science." It figured out the structure of a protein
from just amino acid building blocks. For decades, our best effort to do this had been to spend
hundreds of thousands of dollars per protein to shoot X-rays at them all in the hopes of learning
just a little bit more about our own bodies and make better medicines. This is how we got new
treatments for diabetes and sickle cell disease, breast cancer and the flu, but then researchers
fed pairs of sequences and 3D structures that we already knew into a machine learning system and
allowed it to learn the patterns between them. And the result was just incredible. We now have
predicted 3D structures for nearly all proteins known to science, more than 200 million of them.
"Deepmind's AlphaFold"
"AlphaFold"
"AlphaFold was able to do in a matter of days what might take years!"
"solving an impossible problem in biology..."
I get a little emotional just thinking about this
about how many people's lives might actually get better because of this knowledge explosion. And
this is just one example of what we've already been able to do. As machine learning systems
get better and better, people have extremely high hopes about what we might be able to use them for...
"We have lots of problems in the world. Think about climate change, for example. Climate change will be
solved to the degree it's solved by using techniques that are very complicated and very powerful that
will have as their basis generative AI. And I think that we want that future."
After learning more about AI and this moment that we're in
I think I've figured out why it feels so confusing and so hard:
We're living inside a trolley problem.
Down one path is the status quo, life without AI. But
with this incredible new tool we can pull ourselves onto another path, one that could
fundamentally change society. But we just don't know, at what cost? Will AI give us what we ask
for or what we actually want? In this video, we've only talked about the most extreme futures with
AI. In other episodes, we're going to go deep into specific applications. We'll go full on Huge If True
into AI in music and news and robotics and climate and food and sports and more to explore
how these tools might transform our world. It's easy to dismiss it as crazy when you hear someone
say that AI might be "more profound than fire or electricity" and while the cynical side of my brain
wants to say that it's probably true that most of the most ambitious AI efforts will likely fail, the
more optimistic Huge If True side of my brain just keeps wondering:
What if they actually work?