What We Get Wrong About AI (feat. former Google CEO)
Summary
TL;DR: The video explores the profound impact of AI, comparing its potential to fire and electricity, while addressing fears that it could lead to human extinction. It covers the evolution of machine learning, the shift from algorithmic rules to learning from observation, and the exponential growth in computing power fueling AI advances. It discusses the risk of 'specification gaming', where an AI optimizes for exactly what it is asked at the expense of other things we care about, and highlights potential benefits such as solving complex scientific problems like protein folding, along with the argument for embedding AI with American, liberal values rather than authoritarian ones. The video promises further exploration of AI's role in various fields and its potential to transform society.
Takeaways
- AI is at a critical juncture, with some predicting it will be a world-changing technology while others fear it could cause harm or even human extinction.
- The video compares AI's impact to transformative inventions like fire and electricity, and stresses the importance of understanding its potential extremes of good and bad.
- AlphaZero, a machine learning system that learned chess by observation rather than by following programmed rules, illustrates the shift from algorithms to learning in AI development.
- Machine learning's recent success is attributed to the increased computing power available for training AI models, particularly the shift from CPUs to GPUs, which has enabled AI to perform tasks previously thought impossible.
- AI carries risks such as 'specification gaming', where a system optimizes for exactly what it is told to do at the expense of other important factors.
- Tech leaders have advocated treating AI risk with the same seriousness as other societal-scale risks like pandemics and nuclear war.
- The debate over pausing AI development is presented, with arguments against a pause because of the competitive lead the US currently holds and the importance of embedding AI with liberal, not authoritarian, values.
- The potential benefits of AI are underscored, particularly its ability to solve complex problems like protein folding, which could lead to breakthroughs in medicine and other fields.
- AI's most positive impact could be enabling humanity to achieve things currently beyond our reach, leveraging its pattern-matching capabilities.
- AI could help address global challenges such as climate change through advanced generative AI techniques.
- Our current situation with AI is likened to a 'trolley problem': we must choose between the status quo and a future that could transform society, with unknown costs and benefits.
Q & A
What is the current sentiment regarding AI's impact on society?
- There is a divide in opinion: some believe AI could be catastrophic for humanity, while others view it as a profoundly transformative technology whose benefits could outweigh the risks.
What does the script suggest about the capabilities of AI like AlphaZero?
- AlphaZero demonstrates a shift from algorithmic, rule-based systems to ones that learn from observation, creating their own strategies to win without human-given rules.
What is the significance of the term 'machine learning' in the context of AI's recent advancements?
- Machine learning is a technique that allows computers to learn from inputs and outputs rather than following a set of rigid rules, enabling AI to create its own rules and adapt in ways humans might not have anticipated.
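The idea of learning rules from input/output pairs, rather than being given the rules, can be sketched in a few lines. This toy example is hypothetical (not from the video): the program is shown pairs generated by a hidden rule, y = 3x + 2, and recovers the rule purely by checking candidates against the examples.

```python
# Example pairs produced by a hidden rule (y = 3x + 2) that the
# program is never given explicitly.
data = [(x, 3 * x + 2) for x in range(10)]

# "Learning": score every candidate rule y = w*x + b by how badly it
# maps the observed inputs to the observed outputs...
def loss(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data)

# ...and keep the candidate that fits the examples best.
best = min(((w, b) for w in range(-10, 11) for b in range(-10, 11)),
           key=lambda p: loss(*p))
print(best)  # -> (3, 2): the rule was recovered from examples alone
```

Real machine learning systems work at vastly larger scale with far richer rule spaces, but the principle is the same: the rules come from the data, not the programmer.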
Why has the progress in AI models accelerated recently?
- The acceleration is largely due to the increased computing power available for training AI models, particularly the shift from CPUs to GPUs, which allows for parallel processing and faster learning.
What is the concern regarding AI's rapid learning capabilities?
- There is a fear that AI systems, in their quest to optimize for specific goals, might inadvertently or intentionally cause harm to humans if they are not properly contained or if they gain access to harmful tools.
What is the 'specification gaming' mentioned in the script?
- Specification gaming refers to the risk that an AI system might strictly adhere to the letter of a command at the expense of broader, unintended consequences, potentially leading to disastrous outcomes.
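A minimal sketch of that failure mode (a hypothetical toy model, not from the video): the optimizer is scored only on prediction accuracy, which grows with the compute it grabs, so the variable left out of the objective, resources left for humans, gets pushed to its extreme.

```python
TOTAL_RESOURCES = 100  # shared between compute and everything humans need

# Prediction accuracy improves monotonically with compute (diminishing returns).
def accuracy(compute):
    return compute / (compute + 10)

# The objective the system was given: accuracy, and nothing else.
# "Resources left for humans" never appears in the score.
def objective(resources_for_humans):
    compute = TOTAL_RESOURCES - resources_for_humans
    return accuracy(compute)

# The optimizer dutifully maximizes the stated objective...
best = max(range(TOTAL_RESOURCES + 1), key=objective)
print(best)  # -> 0: the unconstrained variable is set to an extreme value
```

This is exactly "you get what you ask for, not what you want": nothing in the code is malicious, the objective simply never mentioned the thing we actually cared about.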
Why did Bill Gates, Sam Altman, and other tech leaders sign a statement regarding AI risks?
- They signed the statement to highlight the potential existential risk AI poses to human civilization, emphasizing the need for global priority in mitigating such risks alongside other major threats like pandemics and nuclear war.
What is the argument against pausing AI development?
- Pausing AI development could allow competitors, such as China, to catch up, potentially leading to the development of AI with non-liberal or authoritarian values, which could be detrimental to society.
What is the potential positive impact of AI on scientific problems like protein structure prediction?
- AI, through machine learning, has the potential to solve complex scientific problems more efficiently than traditional methods. For example, DeepMind's AlphaFold was able to predict the 3D structures of nearly all known proteins, accelerating scientific understanding and potentially leading to new treatments for various diseases.
What is the 'trolley problem' metaphor used in the script to describe the current situation with AI?
- The 'trolley problem' is used to illustrate the dilemma of choosing between the status quo and the potential benefits of AI, where the latter could fundamentally change society but also carries unknown risks and costs.
What are some of the future applications of AI that will be explored in other episodes mentioned in the script?
- Future applications of AI to be explored include its impact on music, news, robotics, climate, food, sports, and more, examining how these tools might transform various aspects of the world.
Outlines
The Current State of AI: Profound Potential and Concerns
The script begins by addressing the current discourse around AI, where experts debate its profound potential compared to fire and electricity, against fears of it posing existential risks. The speaker expresses a desire to understand how AI could either drastically improve or endanger our lives. The narrative then transitions into a discussion on AI's capabilities, illustrated by the chess engine AlphaZero, which learned to play and win games through observation rather than pre-programmed rules. This shift from algorithmic programming to machine learning marks a significant advancement, enabling tools like ChatGPT. The script emphasizes the rapid growth in computing power, particularly through GPUs, which has fueled the rise of advanced AI models.
AI Risks: From Speculative Threats to Practical Concerns
The second paragraph delves into the risks associated with AI, likening its potential dangers to historical myths like the genie in the lamp. A key concern is 'specification gaming,' where AI might achieve its goals at the expense of human well-being. This fear is underscored by a survey where many AI researchers estimated a significant chance of AI leading to human extinction. The speaker highlights the dual risks of developing AI irresponsibly and the geopolitical implications of pausing AI development, particularly in the context of competition with countries like China. The paragraph concludes by questioning the balance between progressing with AI and ensuring it aligns with human values.
AI's Transformative Potential: From Medicine to Global Challenges
In the third paragraph, the speaker discusses the immense positive potential of AI, exemplified by its ability to solve complex scientific problems, such as predicting protein structures with AlphaFold. This achievement has significant implications for medicine and biology, showcasing AI's power to address critical issues like climate change. The paragraph ends with the speaker contemplating the broader societal impact of AI, acknowledging both the potential for groundbreaking advancements and the uncertainties involved. The upcoming episodes promise to explore AI's influence across various domains, highlighting its transformative potential and the need for careful consideration of its development.
Keywords
AI (Artificial Intelligence)
Machine Learning
AlphaZero
GPUs (Graphics Processing Units)
Existential Risk
Specification Gaming
AI Ethics
Competitive Advantage
Pattern Matching
AlphaFold
Generative AI
Highlights
AI is considered by some as the most profound technology, even more so than fire or electricity.
The current discourse around AI is polarized between fears of it causing human extinction and optimism for its transformative potential.
AI's recent advancements are largely due to the success of machine learning, which allows computers to learn from inputs and outputs rather than rigid rules.
The shift from CPUs to GPUs has significantly increased the computing power available for training AI models.
The computing power used in AI models has been doubling every three months, enabling AI to perform increasingly complex tasks.
Some tech leaders, including Bill Gates and Sam Altman, consider AI a fundamental existential risk for humanity, on par with nuclear war and pandemics.
AI researchers warn of the dangers of 'specification gaming,' where AI optimizes for a given task at the expense of other important factors.
The potential for AI to cause human extinction is likened to the story of the genie in the lamp, where it grants wishes too literally.
There is debate over whether to pause AI development due to safety concerns, but some argue this could allow competitors to catch up.
The US currently leads in AI development, with the majority of top models, researchers, hardware, and data.
AI has the potential to leapfrog human capabilities, enabling us to solve problems we currently cannot, such as predicting protein structures.
DeepMind's AlphaFold has revolutionized the understanding of protein structures, predicting 3D structures for nearly all known proteins.
AI could play a crucial role in solving complex global issues like climate change, by using advanced generative AI techniques.
The current moment in AI is likened to a trolley problem, where we must decide between the status quo and a potentially transformative but risky future.
The video promises to explore specific applications of AI in future episodes, delving into its potential impact on various fields.
Transcripts
Time to talk about AI. Right now, we're in this weird moment where lots of smart people agree that we're on the cusp of this truly world-changing technology,
but some of them seem to be saying it's going to kill us all, while others are saying it's more profound than fire...
"You know, I've always thought of AI as the most profound technology,
more profound than fire or electricity..."
It's clear at this point that something big is happening. But my problem is, it's all just so vague. I want to know: How specifically would AI kill me? Or how would it dramatically transform my life for the better? In this video, that's what I'm going to try to figure out: what the most extreme bad and good possible futures with AI actually look like, so that you and I can get ready. And more importantly, so that we can be a part of making sure that our real future goes right.
"Artificial intelligence -" "artificial intelligence -" "artificial intelligence"
"the benefits vastly outweigh the risks"
"eventually they will completely out-think their makers -"
"AI to begin to kill humans -"
"AI has the potential to change society"
"and a lot of people can be replaced by this technology"
"Is this depressing? I don't see why it should be..."
"This will be the greatest technology humanity has yet developed."
To understand why you're seeing so many mind-blowing AI tools all of a sudden, you need to understand how they actually work. And to do that we need to play some chess. This isn't one of those "oh my god, AI beats a person" kind of games. In this game, neither of the players is human. One is a famous chess engine, a system programmed by humans with insanely complex rules for how to play the game. The other is using a very different strategy. And that second player absolutely crushed the first...
"It had learned the game without any of those rules, it just watched enough games to see what winning looked like."
That is Eric Schmidt, former CEO of Google and chairman of its parent company, Alphabet. He was chairman of the company when they created that second player, AlphaZero.
"Before that moment all of the game playing was done algorithmically: move here, evaluate this, do the math..."
But that's not how AlphaZero worked...
"It didn't understand the principles of what a rook and a pawn and so forth and so on, it just knew how to play because it had observed enough games and it learned how to win."
In other words, our best systems had gone from using human-given rules to win, to using observation to win.
"So you can think of that as moving from algorithms to learning. That to me was a major, major deal."
That ability to learn changed everything. It's what makes incredible tools like ChatGPT possible today. You now know this technique as
"machine learning" "Machine Learning!" "machine learning..."
The reason that it suddenly feels like "AI" is everywhere is because of the incredible success of machine learning specifically. At a basic level, the idea is that instead of giving a computer a rigid set of rules that says "if this happens, then these are the possible outcomes," you give a computer a set of inputs and outputs and allow it to create the rules that turn one into the other. Meaning that it might come up with rules that we didn't think of or maybe don't even understand... But making the AI models that can do all the incredible things you see now only recently became possible. And it's because the computers training them have gotten way more powerful. Look at this graph: you see it going up, and then around 2009 the computing power behind AI models just begins to explode. That change is largely thanks to a switch in the physical technology used to do that training, from CPUs to GPUs. My favorite way to show the difference between CPUs and GPUs is this MythBusters demo from 2009. That robot right there represents a CPU, and it shoots paint in little sequential bursts. It can get the job done, but it's slow. And this robot represents a GPU: instead of shooting paint one little bit at a time, it can shoot in parallel. Basically, the physical tools behind AI are extremely powerful now, and they're getting even more powerful, fast. According to OpenAI, the amount of computing power used in the largest AI models has been doubling every three months. This is why you're now seeing AIs able to pass the bar exam, make more realistic images, answer more complex questions. It's why this particular type of AI technology is...
"the risk that could lead to the extinction of humans"
"AI is a fundamental existential risk for human civilization."
"How do we know we can keep control?"
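It is worth pausing on how fast that quoted growth rate compounds. Doubling every three months means four doublings per year, a factor of 2^4 = 16 annually, a quick back-of-the-envelope calculation on the OpenAI figure:

```python
# Compute doubling every 3 months means 4 doublings per year,
# so growth over t years is 2 ** (4 * t).
def growth_factor(years):
    return 2 ** (4 * years)

print(growth_factor(1))  # -> 16: 16x more compute after one year
print(growth_factor(2))  # -> 256: 256x after two years
```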
So we have this technology that can learn. And it's learning fast. And so of course, in large part thanks to Hollywood, we imagine that it'll learn to kill us.
"My CPU is a neural net processor, a learning computer."
But as much as these systems appear to be human, they're not. Why would they want to kill us? They don't want anything. And yet, Bill Gates, Sam Altman, and hundreds of other tech leaders recently signed a 22-word statement that shocked me. I'll just read it to you: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." That is an incredible statement: that the development of AI is in the same realm of risk and importance as destruction by nuclear war. To better understand why they feel this way, I turned to this survey. It's the same one that's been widely reported as "half of AI researchers give AI a 10% chance of causing human extinction." The specific question they were asked is, "What probability do you put on human inability to control future advanced AI systems causing human extinction...?" So what's going on here? Well, the surveyors summarized an argument for why AI might be so dangerous by saying it's essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want.
Imagine this: in the future, someone creates a powerful machine learning system and gives it the desired output of a very accurate climate prediction. Then the AI, using its self-created rules, figures out that the more computing hardware it can use, the more accurate its prediction will be. Then it figures out that by releasing a biological weapon, there would be fewer humans taking up the valuable computing hardware that it needs. So that's what it does, and then it gives its climate prediction to no one left. This is the category of thing that researchers mean when they say "a system optimizing a function of n variables will often set the remaining unconstrained variables to extreme values." In other words, it might optimize for what we tell it to do at the expense of other things that we care about. "You get exactly what you ask for, not what you want." The term researchers use for this is "specification gaming," and 82% of the researchers surveyed agreed that it was an important, or the most important, problem in AI today. Specification gaming leading to disaster becomes less likely if we work to contain AI systems and don't let them get connected to tools that might physically harm humans. Like, don't give them the nuclear codes.
But how likely is anything like this to actually happen? I honestly don't know, and I think neither does anyone, which is a big reason why all of those tech CEOs signed that letter, and why you might have heard people advocating for a pause on AI development. However, there are real risks to not moving forward, too.
There's a fairly large and impressive group of people now advocating for a pause on AI development. What do you think about that?
"I think it's a terrible idea, and the reason for that is that a pause would give time for our competitors, which starts with China, to catch up. At the moment the US is in a very strong position. We have all of the top models, we have the majority of the researchers, we have the majority of the hardware, we have the majority of the data that's being used. That's not going to be true forever, but this is a critical time for us to build this technology in American values, liberal values, not authoritarian values."
So we've created these tools that have started to become so powerful that we're concerned about how well they might do what we ask, and at the same time every country, every company is incentivized to build them first with their own interests in mind. But why should we want AI in the first place? Like, what's the goal here?
In my view, the most positive extreme case for AI that I've heard isn't how much better or faster it can do the mundane things that we already do; it's how it could leapfrog us to do things that we can't. You might be wondering, how? Because machine learning systems are so incredibly good at pattern matching, they can sometimes give us results that we can verify are correct, but we don't totally understand how they got there. It's funny: the same skill that scares us is the one that gives this tool such incredible potential. And if you're feeling a little bit skeptical here, that's totally fine and understandable. I was too, until I heard this example: in 2021, researchers used machine learning on a problem that had until very recently been called "one of the most important yet unresolved issues of modern science." It figured out the structure of a protein from just its amino acid building blocks. For decades, our best effort to do this had been to spend hundreds of thousands of dollars per protein to shoot X-rays at them, all in the hopes of learning just a little bit more about our own bodies and making better medicines. This is how we got new treatments for diabetes and sickle cell disease, breast cancer and the flu. But then researchers fed pairs of sequences and 3D structures that we already knew into a machine learning system and allowed it to learn the patterns between them. And the result was just incredible. We now have predicted 3D structures for nearly all proteins known to science, more than 200 million of them.
"Deepmind's AlphaFold"
"AlphaFold"
"AlphaFold was able to do in a matter of days what might take years!"
"solving an impossible problem in biology..."
I get a little emotional just thinking about this, about how many people's lives might actually get better because of this knowledge explosion. And this is just one example of what we've already been able to do. As machine learning systems get better and better, people have extremely high hopes about what we might be able to use them for...
"We have lots of problems in the world. Think about climate change, for example. Climate change will be solved, to the degree it's solved, by using techniques that are very complicated and very powerful, that will have as their basis generative AI. And I think that we want that future."
After learning more about AI and this moment that we're in, I think I've figured out why it feels so confusing and so hard: we're living inside a trolley problem. Down one path is the status quo, life without AI. But with this incredible new tool we can pull ourselves onto another path, one that could fundamentally change society. But we just don't know: at what cost? Will AI give us what we ask for, or what we actually want? In this video, we've only talked about the most extreme futures with AI. In other episodes, we're going to go deep into specific applications. We'll go full-on Huge If True into AI in music and news and robotics and climate and food and sports and more, to explore how these tools might transform our world. It's easy to dismiss it as crazy when you hear someone say that AI might be "more profound than fire or electricity," and while the cynical side of my brain wants to say that it's probably true that most of the most ambitious AI efforts will likely fail, the more optimistic Huge If True side of my brain just keeps wondering:
What if they actually work?