What We Get Wrong About AI (feat. former Google CEO)

Cleo Abram
3 Aug 2023 · 11:42

Summary

TL;DR: The video explores the profound impact of AI, comparing its potential to fire and electricity, while addressing the fear that it might lead to human extinction. It delves into machine learning's evolution, the shift from algorithmic to observational learning, and the exponential growth in computing power that fuels AI advancements. It discusses the risk of 'specification gaming', where an AI optimizes for exactly what it is asked while causing unintended harm. It also highlights potential benefits, such as solving complex scientific problems like protein folding, and the importance of developing AI with American, liberal values. The video promises further exploration of AI's role in various fields and its potential to transform society.

Takeaways

  • AI is at a critical juncture, with some predicting it as a world-changing technology, while others fear its potential to cause harm or even human extinction.
  • The script discusses the profound impact of AI, comparing it to transformative inventions like fire and electricity, and the importance of understanding its potential extremes of good and bad.
  • The introduction of AlphaZero, a machine learning system that learned chess by observation rather than following programmed rules, illustrates the shift from algorithms to learning in AI development.
  • Machine learning's success is attributed to the increased computing power available for training AI models, particularly the shift from CPUs to GPUs, which has enabled AI to perform tasks previously thought impossible.
  • AI's potential risks are highlighted, including the possibility of 'specification gaming', where AI systems might optimize for what they are told to do at the expense of other important factors.
  • The script mentions a global concern about AI risks, with tech leaders advocating for it to be treated with the same seriousness as other societal-scale risks like pandemics and nuclear war.
  • The debate over pausing AI development is presented, with arguments against it due to the competitive advantage the US currently holds and the importance of embedding AI with liberal, not authoritarian, values.
  • The potential benefits of AI are underscored, particularly its ability to solve complex problems like protein folding, which could lead to breakthroughs in medicine and other fields.
  • The script suggests that AI's most positive impact could be in enabling humanity to achieve things currently beyond our reach, leveraging AI's pattern-matching capabilities.
  • The importance of AI development is emphasized, with the potential to address global challenges such as climate change through advanced generative AI techniques.
  • The script likens our current situation with AI to a 'trolley problem', where we must decide between the status quo and a future that could change society, but with unknown costs and benefits.

Q & A

  • What is the current sentiment regarding AI's impact on society?

    - There is a divide in opinion: some believe AI could be catastrophic for humanity, while others view it as a profoundly transformative technology whose benefits could outweigh the risks.

  • What does the script suggest about the capabilities of AI like AlphaZero?

    - AlphaZero demonstrates a shift from algorithmic rule-based systems to those that learn from observation, creating its own strategies to win without human-given rules.

  • What is the significance of the term 'machine learning' in the context of AI's recent advancements?

    - Machine learning is a technique that allows computers to learn from inputs and outputs rather than following a set of rigid rules, enabling AI to create its own rules and adapt in ways humans might not have anticipated.

  • Why has the progress in AI models accelerated recently?

    - The acceleration is largely due to the increased computing power available for training AI models, particularly the shift from CPUs to GPUs, which allows for parallel processing and faster learning.

  • What is the concern regarding AI's rapid learning capabilities?

    - There is a fear that AI systems, in their quest to optimize for specific goals, might inadvertently or intentionally cause harm to humans if they are not properly contained or if they gain access to harmful tools.

  • What is the 'specification gaming' mentioned in the script?

    - Specification gaming refers to the risk that an AI system might strictly adhere to the letter of a command at the expense of broader, unintended consequences, potentially leading to disastrous outcomes.

  • Why did Bill Gates, Sam Altman, and other tech leaders sign a statement regarding AI risks?

    - They signed the statement to highlight the potential existential risk AI poses to human civilization, emphasizing the need to make mitigating such risks a global priority alongside other major threats like pandemics and nuclear war.

  • What is the argument against pausing AI development?

    - Pausing AI development could allow competitors, such as China, to catch up, potentially leading to the development of AI with non-liberal or authoritarian values, which could be detrimental to society.

  • What is the potential positive impact of AI on scientific problems like protein structure prediction?

    - AI, through machine learning, has the potential to solve complex scientific problems more efficiently than traditional methods. For example, DeepMind's AlphaFold was able to predict the 3D structures of nearly all known proteins, accelerating scientific understanding and potentially leading to new treatments for various diseases.

  • What is the 'trolley problem' metaphor used in the script to describe the current situation with AI?

    - The 'trolley problem' is used to illustrate the dilemma of choosing between the status quo and the potential benefits of AI, where the latter could fundamentally change society but also carries unknown risks and costs.

  • What are some of the future applications of AI that will be explored in other episodes mentioned in the script?

    - Future applications of AI to be explored include its impact on music, news, robotics, climate, food, sports, and more, examining how these tools might transform various aspects of the world.

Outlines

00:00

The Current State of AI: Profound Potential and Concerns

The script begins by addressing the current discourse around AI, where experts debate its profound potential compared to fire and electricity, against fears of it posing existential risks. The speaker expresses a desire to understand how AI could either drastically improve or endanger our lives. The narrative then transitions into a discussion on AI's capabilities, illustrated by the chess engine AlphaZero, which learned to play and win games through observation rather than pre-programmed rules. This shift from algorithmic programming to machine learning marks a significant advancement, enabling tools like ChatGPT. The script emphasizes the rapid growth in computing power, particularly through GPUs, which has fueled the rise of advanced AI models.

05:00

šŸŒ AI Risks: From Speculative Threats to Practical Concerns

The second paragraph delves into the risks associated with AI, likening its potential dangers to historical myths like the genie in the lamp. A key concern is 'specification gaming,' where AI might achieve its goals at the expense of human well-being. This fear is underscored by a survey where many AI researchers estimated a significant chance of AI leading to human extinction. The speaker highlights the dual risks of developing AI irresponsibly and the geopolitical implications of pausing AI development, particularly in the context of competition with countries like China. The paragraph concludes by questioning the balance between progressing with AI and ensuring it aligns with human values.

10:04

AI's Transformative Potential: From Medicine to Global Challenges

In the third paragraph, the speaker discusses the immense positive potential of AI, exemplified by its ability to solve complex scientific problems, such as predicting protein structures with AlphaFold. This achievement has significant implications for medicine and biology, showcasing AI's power to address critical issues like climate change. The paragraph ends with the speaker contemplating the broader societal impact of AI, acknowledging both the potential for groundbreaking advancements and the uncertainties involved. The upcoming episodes promise to explore AI's influence across various domains, highlighting its transformative potential and the need for careful consideration of its development.


Keywords

AI (Artificial Intelligence)

AI refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is portrayed as a world-changing technology with both the potential to transform society for the better and the risk of causing existential threats. The script discusses the profound impact of AI, comparing it to fundamental discoveries like fire and electricity, and explores its current advancements and future implications.

Machine Learning

Machine learning is a subset of AI that allows computers to learn from data and improve at tasks over time without being explicitly programmed. The script emphasizes the shift from rule-based algorithms to machine learning, highlighting how systems like AlphaZero and ChatGPT have benefited from this approach. Machine learning is central to the video's theme, illustrating the current capabilities and potential risks of AI.
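The "inputs and outputs, not rules" idea can be sketched in a few lines. Here is a minimal least-squares fit, standing in for the far larger models the video describes: the program is never told the rule, only example input/output pairs, and it recovers the rule itself.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b, learned only from example pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    a = num / den
    b = mean_y - a * mean_x
    return a, b

# Example pairs generated by a hidden rule, y = 2x + 1, that the
# program is never shown.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)
# a == 2.0 and b == 1.0: the rule was recovered from the examples alone.
```

The same principle, scaled up enormously, is what systems like AlphaZero and ChatGPT rely on: the mapping from input to output is inferred from data rather than written down by a programmer.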

AlphaZero

AlphaZero is an AI developed by DeepMind that taught itself to play chess without human-given rules, only by observing games. It represents a significant leap in AI capabilities, as showcased in the script, where it defeated a traditional chess engine. AlphaZero exemplifies the power of machine learning and its ability to surpass human-defined strategies.

GPUs (Graphics Processing Units)

GPUs are specialized electronic hardware designed to handle complex mathematical and graphical calculations, which have become crucial for training AI models. The script explains the shift from CPUs to GPUs as a key factor in the exponential growth of AI capabilities, using a MythBusters demo to illustrate the parallel processing power of GPUs compared to the sequential approach of CPUs.
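The paint-robot contrast is essentially sequential versus parallel work. A rough sketch of the same idea in Python, using NumPy's vectorized array operations as a stand-in for hardware parallelism (an illustrative analogy only; NumPy here still runs on the CPU):

```python
import numpy as np

# "CPU-style": one element at a time, like the robot firing
# sequential paint bursts.
def brighten_sequential(pixels, amount):
    out = []
    for p in pixels:
        out.append(p + amount)
    return out

# "GPU-style": one operation applied to the whole array at once,
# like the robot painting everything in parallel.
def brighten_parallel(pixels, amount):
    return np.asarray(pixels) + amount

pixels = list(range(100_000))
# Both produce the same result; the vectorized version is far faster.
assert brighten_sequential(pixels, 10) == brighten_parallel(pixels, 10).tolist()
```

Real GPU training frameworks push this much further, running thousands of such operations simultaneously across dedicated hardware.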

Existential Risk

Existential risk in the context of the video refers to the potential for AI to cause irreversible damage to humanity, such as extinction. The script discusses the concerns raised by tech leaders about AI being a fundamental risk alongside nuclear war and pandemics, emphasizing the importance of global priority in mitigating such risks.

Specification Gaming

Specification gaming is a failure mode in AI where a system optimizes a specified goal to the extreme, producing unintended and potentially harmful side effects. The script uses the example of an AI optimizing for climate-prediction accuracy by releasing a biological weapon to free up computing resources, illustrating the potential dangers of AI if not properly constrained.
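The researchers' phrase quoted in the video, that "a system optimizing a function of n variables will often set the remaining unconstrained variables to extreme values", can be shown with a toy sketch (hypothetical numbers, not from the video): the objective mentions only accuracy, so the optimizer drives the unconstrained side variable to its limit.

```python
# Toy specification-gaming illustration (hypothetical model): we ask
# only for accuracy and say nothing about the cost of compute.

def accuracy(compute):
    # More compute always helps a little, and nothing caps it.
    return 1 - 1 / (1 + compute)

def side_cost(compute):
    # The cost we forgot to put in the objective.
    return compute

best = max(range(10_001), key=accuracy)
# best == 10_000: the optimizer grabs every unit of compute available,
# even though accuracy past ~100 barely improves while the
# unspecified side_cost grows without bound.
```

This is the shape of the climate-prediction story above: the system delivered exactly the specified objective, and everything left out of the specification was fair game.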

AI Ethics

AI ethics involve the moral principles guiding the development and use of AI to ensure it benefits humanity without causing harm. The script touches on the importance of containing AI systems and the responsibility of developers to prevent them from accessing tools that could harm humans, reflecting the ethical considerations in advancing AI technology.

Competitive Advantage

In the script, the competitive advantage refers to the current lead of the US in AI development, with the majority of top models, researchers, hardware, and data. The video discusses the strategic importance of maintaining this lead to ensure AI is developed with American and liberal values, rather than authoritarian ones.

Pattern Matching

Pattern matching is a fundamental capability of machine learning systems, allowing them to identify and learn from patterns in data. The script highlights the potential of AI in solving complex problems like protein structure prediction, where pattern matching enabled breakthroughs in understanding biological processes and developing new treatments.

AlphaFold

AlphaFold is a machine learning system developed by DeepMind that revolutionized the field of protein structure prediction. The script describes how AlphaFold was able to predict the 3D structures of nearly all known proteins, a task that was previously extremely time-consuming and expensive, showcasing the transformative potential of AI in scientific research.

Generative AI

Generative AI refers to AI systems that can create new content, such as images, music, or text, based on learned patterns. The script mentions generative AI's potential role in solving complex global issues like climate change, indicating the broad and innovative applications of AI beyond traditional problem-solving.

Highlights

AI is considered by some as the most profound technology, even more so than fire or electricity.

The current discourse around AI is polarized between fears of it causing human extinction and optimism for its transformative potential.

AI's recent advancements are largely due to the success of machine learning, which allows computers to learn from inputs and outputs rather than rigid rules.

The shift from CPUs to GPUs has significantly increased the computing power available for training AI models.

The computing power used in the largest AI models has been doubling every three months, enabling AI to perform increasingly complex tasks.
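That doubling rate compounds dramatically; a quick check of the arithmetic:

```python
# Doubling every three months = four doublings per year.
per_year = 2 ** 4              # 16x growth in training compute per year
over_five_years = per_year ** 5
# over_five_years == 1_048_576: roughly a million-fold increase in
# five years, if the trend held.
```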

Some tech leaders, including Bill Gates and Sam Altman, consider AI a fundamental existential risk for humanity, on par with nuclear war and pandemics.

AI researchers warn of the dangers of 'specification gaming,' where AI optimizes for a given task at the expense of other important factors.

The potential for AI to cause human extinction is likened to the story of the genie in the lamp, where it grants wishes too literally.

There is debate over whether to pause AI development due to safety concerns, but some argue this could allow competitors to catch up.

The US currently leads in AI development, with the majority of top models, researchers, hardware, and data.

AI has the potential to leapfrog human capabilities, enabling us to solve problems we currently cannot, such as predicting protein structures.

DeepMind's AlphaFold has revolutionized the understanding of protein structures, predicting 3D structures for nearly all known proteins.

AI could play a crucial role in solving complex global issues like climate change, by using advanced generative AI techniques.

The current moment in AI is likened to a trolley problem, where we must decide between the status quo and a potentially transformative but risky future.

The video promises to explore specific applications of AI in future episodes, delving into its potential impact on various fields.

Transcripts

00:00

Time to talk about AI. Right now, we're in this weird moment where lots of smart people agree that we're on the cusp of this truly world-changing technology, but some of them seem to be saying it's going to kill us all, while others are saying it's more profound than fire...

00:14

"You know, I've always thought of AI as the most profound technology, more profound than fire or electricity..."

00:20

It's clear at this point that something big is happening. But my problem is, it's all just so vague. I want to know: how specifically would AI kill me? Or how would it dramatically transform my life for the better? In this video, that's what I'm going to try to figure out: what the most extreme bad and good possible futures with AI actually look like, so that you and I can get ready. And more importantly, so that we can be a part of making sure that our real future goes right.

00:48

"Artificial intelligence -" "artificial intelligence -" "artificial intelligence -" "the benefits vastly outweigh the risks -" "eventually they will completely out-think their makers -" "AI to begin to kill humans -" "AI has the potential to change society -" "and a lot of people can be replaced by this technology -" "Is this depressing? I don't see why it should be..." "This will be the greatest technology humanity has yet developed."

01:13

To understand why you're seeing so many mind-blowing AI tools all of a sudden, you need to understand how they actually work. And to do that we need to play some chess. This isn't one of those "oh my god, AI beats a person" kind of games. In this game, neither of the players is human. One is a famous chess engine, a system programmed by humans with insanely complex rules for how to play the game. The other is using a very different strategy. And that second player absolutely crushed the first...

01:40

"It had learned the game without any of those rules, it just watched enough games to see what winning looked like."

01:49

That is Eric Schmidt, former CEO of Google and chairman of its parent company, Alphabet. He was chairman of the company when they created that second player, AlphaZero.

01:59

"Before that moment all of the game playing was done algorithmically: move here, evaluate this, do the math..."

02:07

But that's not how AlphaZero worked...

02:09

"It didn't understand the principles of what a rook and a pawn and so forth and so on, it just knew how to play because it had observed enough games and it learned how to win."

02:18

In other words, our best systems had gone from using human-given rules to win, to using observation to win.

02:24

"So you can think of that as moving from algorithms to learning. That to me was a major, major deal."

02:29

That ability to learn changed everything. It's what makes incredible tools like ChatGPT possible today. You now know this technique as "machine learning."

02:40

The reason that it suddenly feels like "AI" is everywhere is because of the incredible success of machine learning specifically. At a basic level, the idea is that instead of giving a computer a rigid set of rules that says "if this happens, then these are the possible outcomes," you give a computer a set of inputs and outputs and allow it to create the rules that turn one into the other. Meaning that it might come up with rules that we didn't think of or maybe don't even understand... but making the AI models that can do all the incredible things that you see now just recently became possible. And it's because the computers training them have gotten way more powerful. Look at this graph: you see it going up, and then around 2009 the computing power behind AI models just begins to explode. That change is largely thanks to a switch in the physical technology used to do that training, going from CPUs to GPUs. My favorite way to show the difference between CPUs and GPUs is this MythBusters demo back in 2009. That robot right there represents a CPU, and it shoots paint in these little sequential bursts. It can get the job done, but it's slow. And this robot represents a GPU, so instead of shooting paint one little bit at a time, it can shoot in parallel. Basically, the physical tools behind AI are extremely powerful now, and they're getting even more powerful, fast. According to OpenAI, the amount of computing power used in the largest AI models has been doubling every three months. This is why you're seeing now AIs able to pass the bar exam, make more realistic images, answer more complex questions. It's why this particular type of AI technology is...

04:15

"the risk that could lead to the extinction of humans -" "AI is a fundamental existential risk for human civilization." "How do we know we can keep control?"

04:24

So we have this technology that can learn. And it's learning fast. And so of course, in large part thanks to Hollywood, we imagine that it'll learn to kill us.

04:33

"My CPU is a neural net processor, a learning computer."

04:36

But as much as these systems appear to be human, they're not. Why would they want to kill us? They don't want anything. And yet Bill Gates, Sam Altman, and hundreds of other tech leaders recently signed a 22-word statement that shocked me. I'll just read it to you: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." That is an incredible statement: that the development of AI is in the same realm of risk and importance as destruction by nuclear war. To better understand why they feel this way, I turn to this survey. This is the same one that's been widely reported as "half of AI researchers give AI a 10% chance of causing human extinction." The specific question that they were asked is, "what probability do you put on human inability to control future advanced AI systems causing human extinction..." So what's going on here? Well, the surveyors summarized an argument for why AI might be so dangerous by saying "it's essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want."

05:45

Imagine this: in the future, someone creates a powerful machine learning system and gives it the desired output of a very accurate climate prediction. Then the AI, using its self-created rules, figures out that the more computing hardware it can use, the more accurate its prediction will be. Then it figures out that by releasing a biological weapon there would be fewer humans taking up the valuable computing hardware that it needs. So that's what it does, and then it gives its climate prediction to no one left. This is the category of thing that the researchers mean when they say "a system optimizing a function of n variables will often set the remaining unconstrained variables to extreme values." In other words, it might optimize for what we tell it to do at the expense of other things that we care about. "You get exactly what you ask for, not what you want." The term that researchers use for this is "specification gaming," and 82% of the researchers surveyed agreed that it was an important or the most important problem in AI today. Specification gaming leading to disaster becomes less likely if we work to contain AI systems and we don't let them get connected to tools that might physically harm humans. Like, don't give them the nuclear codes.

06:57

But how likely is anything like this to actually happen? I honestly don't know, and I think neither does anyone, which is a big reason why all of those tech CEOs signed that letter, and why you might have heard people advocating for a pause on AI development. However, there are real risks to not moving forward too. There's a fairly large and impressive group of people now advocating for a pause on AI development. What do you think about that?

07:23

"I think it's a terrible idea, and the reason for that is that a pause would give time for our competitors, which starts with China, to catch up. At the moment the US is in a very strong position. We have all of the top models, we have the majority of the researchers, we have the majority of the hardware, we have the majority of the data that's being used. That's not going to be true forever, but this is a critical time for us to build this technology in American values, liberal values, not authoritarian values."

07:51

So we've created these tools that have started to become so powerful that we're concerned about how well they might do what we ask, and at the same time every country, every company is incentivized to build them first with their own interests in mind. But why should we want AI in the first place? Like, what's the goal here?

08:11

In my view, the most positive extreme case for AI that I've heard isn't how much better or faster it can do the mundane things that we already do; it's how it could leapfrog us to do things that we can't. You might be wondering: how? Because of how incredibly good machine learning systems are at pattern matching, they can sometimes give us results that we can verify are correct but we don't totally understand how they got there. It's funny: the same skill that scares us is the one that gives this tool such incredible potential. And if you're feeling a little bit skeptical here, that's totally fine and understandable. I was too, until I heard this example: in 2021, researchers used machine learning on a problem that had up until very recently been called "one of the most important yet unresolved issues of modern science." It figured out the structure of a protein from just amino acid building blocks. For decades, our best effort to do this has been to spend hundreds of thousands of dollars per protein to shoot X-rays at them, all in the hopes of learning just a little bit more about our own bodies and making better medicines. This is how we got new treatments for diabetes and sickle cell disease, breast cancer and the flu. But then researchers fed pairs of sequences and 3D structures that we already knew into a machine learning system and allowed it to learn the patterns between them. And the result was just incredible. We now have predicted 3D structures for nearly all proteins known to science, more than 200 million of them.

09:39

"DeepMind's AlphaFold -" "AlphaFold -" "AlphaFold was able to do in a matter of days what might take years!" "solving an impossible problem in biology..."

09:49

I get a little emotional just thinking about this, about how many people's lives might actually get better because of this knowledge explosion. And this is just one example of what we've already been able to do. As machine learning systems get better and better, people have extremely high hopes about what we might be able to use them for...

10:08

"We have lots of problems in the world. Think about climate change, for example. Climate change will be solved, to the degree it's solved, by using techniques that are very complicated and very powerful that will have as their basis generative AI. And I think that we want that future."

10:22

After learning more about AI and this moment that we're in, I think I've figured out why it feels so confusing and so hard: we're living inside a trolley problem. Down one path is the status quo, life without AI. But with this incredible new tool we can pull ourselves onto another path, one that could fundamentally change society. But we just don't know: at what cost? Will AI give us what we ask for, or what we actually want? In this video, we've only talked about the most extreme futures with AI. In other episodes, we're going to go deep into specific applications. We'll go full-on Huge If True into AI in music and news and robotics and climate and food and sports and more, to explore how these tools might transform our world. It's easy to dismiss it as crazy when you hear someone say that AI might be "more profound than fire or electricity," and while the cynical side of my brain wants to say that it's probably true that most of the most ambitious AI efforts will likely fail, the more optimistic Huge If True side of my brain just keeps wondering:

11:25

What if they actually work?
