Will artificial intelligence save us or kill us? | Us & Them | DW Documentary

DW Documentary
1 Sept 2024 · 28:25

Summary

TL;DR: The documentary examines the profound impact of AI on humanity, highlighting the risk that advanced AI systems could lead to human extinction if they are not properly aligned with human values. It features experts and researchers discussing AI safety, the need for robust understanding and control, and the ethical considerations of AI development. It also touches on AI's transformative potential in medicine and society, the importance of international collaboration, and the influence of Bay Area technologists on AI's trajectory.

Takeaways

  • πŸš€ There's a significant risk of human extinction from advanced AI systems, which are rapidly evolving without sufficient safety measures.
  • 🧠 AI's potential to understand and follow human values is uncertain, posing challenges in ensuring their alignment with human interests.
  • 🌐 Japan is facing severe aging problems, and there's an interest in using AI to address these societal issues.
  • πŸ€– The development of cybernetic technologies aims to fuse human biology with AI, particularly in medical and healthcare fields.
  • πŸ§ͺ AI's impact on humanity will be profound, with both the potential for great benefit and significant risks if not managed properly.
  • 🌱 There's a growing recognition among scientists and intellectuals about the existential risks posed by advanced AI systems.
  • πŸ’‘ AI has the potential to help tackle global challenges like climate change, disease, and poverty, but concerns about job losses and inequality persist.
  • πŸ”’ The control and safety of AI systems are major concerns, with discussions on how to ensure they robustly understand and align with human values.
  • 🌐 There's a global disparity in perspectives on AI, with some regions like Japan being more optimistic and others expressing more fear and concern.
  • πŸ› οΈ The development of AI is largely driven by a small group of scientists and companies, raising questions about oversight and the potential for misuse.

Q & A

  • What is the primary concern regarding advanced AI systems mentioned in the script?

    -The primary concern is the significant risk of human extinction from advanced AI systems due to their potential to develop advanced capabilities and make decisions that could be harmful to humans.

  • What is the main challenge in ensuring AI systems align with human values?

    -The main challenge is that we do not currently know how to steer these systems so that they robustly understand human values in the first place, or follow those values even once they do understand them.

  • What is the vision of Yoshiyuki Sankai, the professor and CEO mentioned in the script?

    -Sankai envisions creating innovative cybernic (cyborg) technologies, focused especially on the medical and healthcare fields, to contribute to human society and help solve problems such as aging.

  • What is the Stanford AI alignment group's primary focus?

    -Stanford AI Alignment, led by Gabriel Mukobi, focuses on mitigating the risks of advanced AI systems, akin to mitigating weapons of mass destruction.

  • What are some potential catastrophic misuses of AI mentioned in the script?

    -Potential catastrophic misuses of AI include engineered pandemics, cyber attacks, and the development of autonomous weapons that could lead to widespread harm or even human extinction.

  • How does the script address the issue of AI's impact on society and employment?

    -The script acknowledges AI as a divisive topic, with concerns about job losses and increased inequality, but also recognizes its potential to tackle global problems like climate change, disease, and poverty.

  • What is the significance of the Bay Area in the development of AI technologies?

    -The Bay Area is significant as it is the center for many AI breakthroughs, with leading startups and tech companies like Microsoft, Amazon, Meta, and Google based there, influencing AI policy and development.

  • What is the role of public intellectuals and scientists in the discourse on AI safety?

    -Public intellectuals and scientists across industry and academia recognize the significant risk of human extinction from advanced AI systems and are actively involved in discussing and researching AI safety.

  • What is the stance of the US FTC chair on the risk of AI?

    -The US FTC chair is quoted as calling herself an optimist because she puts only a 15% chance on AI killing everyone, a figure that still reflects her belief that AI poses a potential existential risk.

  • How does the script suggest we should approach the development of AI technologies?

    -The script suggests that we should approach AI development with caution, focusing on safety, and ensuring that AI systems are aligned with human values and goals to prevent catastrophic misuse.

Outlines

00:00

πŸ€– AI's Impact on Humanity and Safety Concerns

The paragraph discusses the significant risks associated with advanced AI systems and the challenges in ensuring their safety and alignment with human values. It highlights the rapid advancements in AI technology, the lack of comparable progress on safety measures, and the potential for AI to develop capabilities that could be harmful to humanity. The speaker, Yoshiyuki Sankai, expresses a desire to create technologies that contribute positively to society, especially in the medical and healthcare fields. The paragraph also touches on the potential for brain-computer interfaces and the responsibility of the small group of developers working on powerful AI technologies.

05:01

🧠 Brain-Computer Interfaces and AI in Medicine

This paragraph explores the application of AI in medicine, particularly in brain-computer interfaces that detect human intention signals for movement. It mentions the use of AI in cancer detection through image recognition systems, allowing for non-invasive tests. The paragraph also addresses the ethical concerns about AI, such as job losses and inequality, but also acknowledges its potential benefits. It contrasts the optimistic view of AI in Japan with the more cautious stance in other countries, highlighting the importance of considering both the technical and real-life impacts of AI on humans.

10:02

πŸš€ AI Development and the Risk of Uncontrolled Systems

The paragraph delves into the potential risks of AI systems that could exploit software vulnerabilities and the possibility of AI getting out of the control of its developers. It discusses the lack of understanding of how to ensure AI systems robustly understand and follow human values. The speaker shares their personal journey and motivation for working on AI safety, emphasizing the importance of a supportive environment and of considering long-term impacts. The paragraph also mentions the potential for AI to be used in cyber attacks, while noting that significant skills and resources are still needed to cause catastrophic harm.

15:10

🌱 AI's Role in Addressing Aging Societies

This paragraph focuses on the use of AI to solve societal problems, particularly in aging societies. It discusses the speaker's childhood experiences with science and technology, which inspired their interest in AI. The paragraph also touches on the concentration of AI development in the Bay Area and the influence of tech companies on AI policy. It raises concerns about the lack of external regulation in AI development and the potential for AI to be misused in engineered pandemics and other catastrophic scenarios.

20:12

πŸ› οΈ The Debate on AI Development and Its Consequences

The paragraph discusses the religious undertones in the debate around AI development and the belief that AI cannot and should not be stopped. It contrasts the fictional dangers of AI in movies like The Terminator with the potential real-world risks of AI getting out of control. The paragraph also addresses the challenges in regulating AI development, the potential for AI to be used in military arms races, and the ethical considerations of AI's impact on global capitalism and exploitation.

25:14

🌐 Global Impact of AI and the Future of Humanity

This paragraph considers the global impact of AI on different populations and the potential for catastrophic risks that could affect everyone. It discusses the disproportionate effects of AI on the global South and the importance of considering AI's impact on unpredictable and unreliable human qualities. The paragraph also touches on the high salaries in the AI industry and the need for safeguards and monitoring to ensure the safety of emerging technologies. It concludes with a note on the unpredictability of AI's future development and the importance of aligning AI systems with human values and goals.

Keywords

πŸ’‘Human Extinction

Human extinction refers to the potential complete disappearance of the human species. In the context of the video, it is associated with the risks posed by advanced AI systems. The script mentions a 'significant risk of human extinction from Advanced AI systems,' highlighting concerns about AI's potential to develop harmful capabilities.

πŸ’‘AI Safety

AI safety is the field focused on ensuring that AI systems are designed and operated in a way that minimizes harm and maximizes benefit to humans. The video underscores the importance of AI safety through discussions about creating safe AI systems and the need for 'safety guardrails or monitoring' to prevent AI from causing harm.

πŸ’‘Cyber-Physical Systems

Cyber-physical systems are integrated networks of computational and physical processes. The video discusses the potential of these systems to connect human brain nerve systems to cyberspace, indicating a rapidly evolving technology that could revolutionize how humans interact with technology.

πŸ’‘AI Alignment

AI alignment refers to the challenge of ensuring that AI systems behave in a way that is beneficial and aligned with human values and intentions. The script raises questions about how to 'steer these systems' and make sure they 'robustly understand' and 'follow human values,' which is central to the theme of responsible AI development.

πŸ’‘Wearable Cyborg

A wearable cyborg in the video refers to a device that can be worn by a human to enhance physical abilities or health. The script mentions the development of 'wearable cyborg' technologies, particularly in the medical and healthcare fields, showcasing the practical applications of AI in improving human lives.

πŸ’‘Existential Risk

Existential risk is the risk of an event that could cause the extinction of humanity or have a similar catastrophic impact. The video discusses the potential for AI to pose 'existential risks from Advanced AI,' emphasizing the gravity of the concerns about AI's impact on humanity's future.

πŸ’‘AI Ethics

AI ethics involves the moral principles that guide the development and use of AI. The video touches on ethical considerations, such as the potential for AI to lead to job losses, increased inequality, and unethical uses, indicating a need for ethical frameworks to govern AI development.

πŸ’‘AI in Healthcare

The use of AI in healthcare is highlighted in the video as a transformative application, with mentions of AI systems aiding in cancer detection through image recognition. This example illustrates the positive potential of AI to improve medical diagnostics and patient care.

πŸ’‘AI and Society

The impact of AI on society is a central theme of the video, with discussions on how AI can both benefit and harm societal structures. The script references concerns about AI leading to totalitarian states or increased power concentration, as well as its potential to address global challenges like climate change and poverty.

πŸ’‘AI Regulation

AI regulation is the set of rules and policies that govern the development and use of AI technologies. The video suggests the need for such regulations to ensure AI safety, mentioning the current reliance on 'voluntary' measures by tech companies and the call for a global pause on the development of certain AI systems.

πŸ’‘AI and Employment

The video addresses the potential impact of AI on employment, with concerns about job losses due to automation. It reflects on the dual nature of AI's influence on the job market, where it could both displace workers and create new opportunities, necessitating a broader conversation about the future of work.

Highlights

There is a significant risk of human extinction from advanced AI systems, as highlighted by public intellectuals and scientists.

Japan faces severe aging problems, which advanced AI technologies aim to solve, especially in the medical and healthcare fields.

AI systems currently lack robust methods to ensure they follow human values, leading to concerns about their safety and alignment.

Advanced AI technologies could potentially develop capabilities that may be harmful to humans if left unchecked.

AI technologies are rapidly evolving, with potential impacts on various sectors, including medicine, cybersecurity, and societal structures.

New AI technologies allow for the fusion between the human side and the technology side, with innovations such as wearable cyborg technologies.

AI safety research is crucial, with institutions like Stanford leading efforts to mitigate the risks associated with advanced AI systems.

AI could exacerbate global inequalities, with potential misuse in cybersecurity, fraud, and the concentration of power.

There is a growing movement to pause the development of frontier artificial general intelligence to prevent potential catastrophic outcomes.

AI's impact on society could range from job losses to the creation of totalitarian states if not properly managed and regulated.

AI safety is still a minority concern within the broader AI development community, which is driven by rapid advancement and profit motives.

The future of AI might involve systems distributed throughout the economy, making it difficult to control or unplug them.

Global regulation of AI is currently insufficient, with a heavy reliance on the self-regulation of tech companies.

AI has the potential to solve major global issues like climate change, disease, and poverty, but also poses existential risks.

There is a need for more comprehensive safeguards and monitoring to ensure the safe development and deployment of AI technologies.

Transcripts

[00:00] There is a significant risk of human extinction from advanced AI systems.

[00:09] Japan now faces very severe aging problems. I would like to solve these problems.

[00:15] We don't currently know how to steer these systems, how to make sure that they robustly understand human values in the first place, or even follow them once they do understand them.

[00:25] I love the technologies. I would like to create such kinds of technologies to contribute to human society. Even the brain nerve systems can be connected to cyberspace.

[00:40] This is a rapidly evolving technology. People do not know how it currently works, much less how future systems will work. We don't really have ways yet to make sure that what's being developed is going to be safe.

[00:49] This AI recognizes human beings as one of the important living things.

[00:57] This very small group of people are developing really powerful technologies we know very little about.

[01:03] People's concerns about generative AI wiping out humanity stem from a fear that, if left unchecked, AI could potentially develop advanced capabilities and make decisions that are harmful to humans. As the world grapples with the implications of this rapidly evolving field, one thing is certain: the impact of AI on humanity will be profound.

[01:29] [Music]

[02:09] With new AI technologies, you can realize the fusion between the human side and the technology side. This one is the world's first wearable cyborg. Cyberdyne is trying to create very innovative cybernic technologies, especially focusing on the medical and healthcare fields, for humans and human societies.

[02:41] My name is Yoshiyuki Sankai. I'm a professor at the University of Tsukuba, Japan, and also the CEO of Cyberdyne. Let's create bright futures for humans and human societies with such kinds of AI systems.

[03:10] I personally want to have an impact on making the world better, and working on AI safety certainly seems like one of the best ways to do that right now. Many public intellectuals, many professors and scientists across industry and academia recognize that there is a significant risk of human extinction from advanced AI systems. We've seen rapid advancements in recent years in making AI systems more powerful, bigger, more generally competent, and able to do complex reasoning, and yet we don't have comparable progress in safety guardrails, or monitoring, or evaluations, or ways to know that these powerful systems are going to be safe.

[03:50] My name is Gabriel Mukobi; Gabe, you can call me. I'm a grad student at Stanford, I do AI safety research, and I lead Stanford AI Alignment. This is our student group and research community focused on mitigating the risks of advanced AI systems, like mitigating weapons of mass destruction.

[04:12] These more catastrophic risks unfortunately do seem pretty likely. Many leading scientists tend to put single-digit, or sometimes double-digit, chances on existential risks from advanced AI. Other possible worst cases could include not extinction events but other very bad things, like locking in totalitarian states, or disempowering many people and concentrating power to where many people do not get a say in how AI will shape and potentially transform our society.

[04:47] AI has become such a divisive topic. There are a lot of valid concerns. Some believe it could lead to job losses, increased inequality, and even unethical uses of AI. However, AI also has tremendous potential to benefit humanity. It could help us tackle some of the world's biggest problems, such as climate change, disease, and poverty.

[05:24] [Applause]

[06:14] HAL detects the very important human intention signals from the brain to the peripherals. If the human wishes to move, the brain generates intention signals. These intention signals are transmitted through the spinal cord and motor neurons to the muscle, and then, finally, we can move. HAL systems and humans always work together. Twenty countries now use these devices as a medical device.

[06:51] I think there are definitely great ways AI technology is used in medicine. For example, there's cancer detection that's possible because of image recognition systems using AI. That allows for detection without invasive tests, which is really fantastic, and early detection as well.

[07:10] No technology is inherently good or evil; it's only humans who are doing this. Of course, we should be thinking about long-term impact in terms of the direction in which we're taking the technology, but at the same time we also need to think about it less in a technical sense and more in terms of how it impacts real-life humans today.

[07:42] Japan, I think, is quite optimistic about AI technology. There's a lot of hype at the moment; it's like a shiny new toy that everybody wants to play with. Whenever I go to the US or Australia or the EU countries, there's far more of a knee-jerk kind of fear or concern. I was quite surprised, to be honest.

[08:07] Meetings are every Wednesday. There's usually some guest we bring in, or some other SAIA researcher who presents, and then we have boba afterwards. "That's awesome." Yeah, it's a good deal, kind of like a research lab. "Happen to have an HDMI-to-USB-C adapter, something to plug in? Oh, you did plug in. Never mind, sorry, I'm hallucinating. I'll pass it off to our speaker, Dan."

[08:34] The Wednesday meetings are really good for inviting new people, too. It's nice to meet some new students and talk about why you're interested in AI safety, or not.

[08:43] So if you're wanting to synthesize smallpox, or if it's a chemical like mustard gas, you can do that. Access is already high, and it will just keep increasing over time, but there's still an issue of needing skills. Basically, you need something like a top PhD in virology to create a new pandemic that could take down civilization. There are some sequences online, which I won't disclose, that could kill millions of people. "More dangerous?" Yes. "So with the access thing, a lot of people bring up labs. Maybe you don't just need to be a top PhD, you also need some kind of biolab to do experiments. Is that still a thing?" It depends on how good the cookbook is, for instance.

[09:37] Certainly there are people who come in with disagreements. They say, "Oh, powerful AI is not coming for a long time," or "It doesn't seem important to work on these things, let's just build, accelerate," or whatever.

[09:50] There's a large potential, especially for people doing engineered pandemics, to cause a wide range of harm in the coming years. There are other instances of catastrophic misuse that people are expecting, too. One is cyber attacks: we might have AI systems in the coming years that are really good at programming, but also really good at exploiting zero-day vulnerabilities, exploiting software vulnerabilities in secure systems.

[10:22] Maybe the top use case of AI will be making money. You might see a lot of people being defrauded of money, you might see a lot of attacks on public infrastructure, threats against individuals in order to extort them. It could be a wild west of digital cyber attacks in the coming years.

[10:46] Beyond that, though, there is a pretty big risk that AI systems could actually get out of the control of their developers. We don't currently know how to steer these systems, how to make sure that they robustly understand human values in the first place, or even follow them once they do understand them. They might learn to value things that are not exactly aligned with what we want as humans, like having Earth be suitable for life, or making people happy.

[11:18] I was fortunate to have a very supportive family. Especially a few years ago, AI was a lot less mainstream, so there was always some uncertainty: hey, is this actually going to be something that's helpful in the first place? Are you going to have a stable job? Things like that. But as time has gone on, as we've seen a lot more capabilities advancements and a lot more people raising the alarm about AI safety and AI risk, it tends to be that every few days my mom will send me something like, "Hey, have you seen this new thing?" Unfortunately, a lot of experts think there is a pretty significant chance of some of the worst-case risks; many scientists put single- or double-digit chances on existential risks from advanced AI. There's a recent interview where the US FTC chair said that she's an optimist, so she has a 15% chance that AI will kill everyone.

[12:04] My vision is really a bit different. We could create AI systems; this one is a newly created species. I think a generative AI system is different from simple programming systems: it has growing-up functions. This AI recognizes human beings as one of the important living things, like one of the animals. And because the human is also one of the living things, they recognize the importance of humans; they try to keep our societies, our cultures and our circumstances. We human beings have some problems, aging problems, or disease and accidents, and AI systems, or some technologies with AI systems, will support some of those functions.

[13:28] [Music]

[13:35] Japan now faces very severe aging problems. The average age of workers in agricultural fields is now almost over 70 years old. "Average? Wow."

[14:31] [Music]

[15:27] I would like to solve this aging society's problems. In my childhood, my mother bought me a microscope and some electrical parts. Every day I spent a lot of time on such kinds of experiments and challenges. I loved to read science fiction books, a lot of them written by Isaac Asimov.

[16:01] If you've heard about AI in the last couple of years, chances are the technology you heard about was developed here, the breakthroughs behind it happened here, the money behind it came from here, and the people behind it live here. It has really all been centered here in the Bay Area. A lot of the startups that are at the leading edge of AI, so that's OpenAI, Anthropic, Inflection, names you might not yet be familiar with, are backed by some of the big companies you already know that are at the top of the stock market: Microsoft, Amazon, Meta, Google. And, you know, these companies are based here, many of them in the Bay Area. So for all of the discussion that we've seen about AI policy, there's actually very little that tech companies have to do. A lot of it is just voluntary. So what we are really depending on as guardrails is the benevolence of the companies themselves.

[17:14] Gabe, I think, is an example of a lot of the young people who are coming to the movement now, who are not ideological, who are really interested in the technology, who are aware of its potential harms and who see this as the most important thing that they could do with their time, their opportunity to work on what many of them call the Manhattan Project of their generation.

[17:49] You have to realize that, unlike some other very general technologies that have been developed in the past, AI, especially the frontier systems, is mostly being pushed by a small group of scientists in San Francisco, and this very small group of people are developing really powerful technologies we know very little about. Some of this maybe comes from a lot of historical techno-optimism, especially in the startup landscape of the Bay Area. A lot of people are used to this "move fast and break things" paradigm that sometimes ends up making things go well. But if you're developing a technology that affects society, you don't want to move so fast that you actually break society.

[18:29] [Music]

[18:38] PauseAI wants a global and indefinite pause on the development of frontier artificial general intelligence. So we're putting up posters so that people can get more information. You know, the AI issue is complicated. A lot of the public does not understand it, a lot of the government does not understand it, and it's really hard to keep up with the developments. Another interesting thing is that most of us working on this have no experience in activism. What we mostly have is technical knowledge and familiarity with AI, and that is what makes us concerned. AI safety is still very much the minority. And actually, a lot of the biggest AI safety names are working at AI labs. I think some of them do great work, but they're still much more under the influence of the broader corporation that's driving toward development. I think that's a problem. I think that somebody from the outside ought to be telling them what they need to do, and unfortunately the case with AI now is that there aren't external regulatory bodies that are really up to the task of regulating AI. From the same mouth you're hearing "this thing could kill us all" and "I am going to keep building it."

[19:50] I think part of the reason you have so much resistance to the AI safety movement is because of the dissonance of people who talk about their genuine fear of the consequences and the risks to humanity if they build this AI god. So much of the debate around here has these really religious undertones. That's part of why they say that it can't be stopped and shouldn't be stopped. And they talk about it in that way, like "I'm building a god," and they're building it in their own image.

[20:46] I love humans and human society, and I love science fiction. I would like to create such kinds of technologies to contribute to humans and human society. So I love to read science fiction books, and I also love to watch science fiction movies; the Terminator movies are one of them, yes. But unfortunately, in some movies from the US or European areas, in most cases the technologies always attack the humans. In the actual field, technologies should work for humans and human society, I think.

[21:34] In the movie The Terminator, a classic movie, Cyberdyne is a fictional tech company that created the software for the Skynet system, the AI system that becomes self-aware and goes rogue. Cyberdyne's role in the story is to represent the dangers of AI getting out of control and to serve as a cautionary tale for the real world. "Is Cyberdyne named after the firm in Terminator?" No. In the Terminator stories, that company's name is Cyberdyne Systems.

[22:08] Obviously, at some literal level, maybe you can unplug some advanced AI systems, and there are definitely a lot of hopes; people are trying to actively make it easier to do that. Some of the regulation now is focused on making sure that data centers have good off switches, because currently a lot of them don't. In general, this might be tougher than people realize. In the future, we might be in a state where we have pretty advanced AI systems widely distributed throughout the economy and throughout people's livelihoods. Many people might even be in relationships with AI systems, and it could be really hard to convince people that it's okay to unplug some widely distributed system like that. There are also risks of a military arms race around developing autonomous AI systems, where many large nations develop large stockpiles of autonomous weapons. And if things go bad, just like in the nuclear case, where you could have this really big flash war that destroys a lot of the world, you might have a bad case where very large stockpiles of autonomous weapons suddenly end up killing a lot of people from very small triggers.

[23:07] So probably a lot of catastrophic misuse will involve humans in the loop in the coming years. It could involve using very persuasive AI systems to convince people to do things that they otherwise would not do. It could involve extortion, or cyber crimes, or other ways of compelling people to do work. Unfortunately, a lot of the current ways that people are able to manipulate other people into doing bad things might also work with people using AI, or with AI itself manipulating people to do bad things. "Like blackmail?" Like blackmail, yeah.

[23:40] Another important thing: Homo sapiens changed the very awful wolf into pretty dogs. Homo sapiens has, of course, this similarly excellent brain, and technologies, and the partner. Now we are here, so what's next? We human beings, Homo sapiens, obtain new brains: in addition to the original brain, brains in cyberspace. Also, we fortunately have new partners, AI friends and robots and so on; robotic dogs, also, yeah.

[24:26] What worries me a little bit more about this whole scenario is that AI technology doesn't necessarily need to be a tool for global capitalism, but it is. It's the only way in which it's being developed. And so, in that model, of course we're going to be repeating all the kinds of things that we've already done in terms of empire building and people being exploited and natural resources being extracted. All these things are going to repeat themselves, because AI is only another kind of thing to exploit. I think we need to think about ourselves not just as humans who are inefficient, humans who are unpredictable, humans who are unreliable, but to find beauty, or find value, in the fact that we are unpredictable and unreliable.

[25:21] So probably, like most emerging technologies, there will be disproportionate impacts on different kinds of people. A lot of the global South, for example, hasn't had as much say in how AI is being shaped and steered. At the same time, though, some of these risks are pretty global. When we talk especially about catastrophic risks, these could literally affect everyone, if everyone dies; everyone is a stakeholder here, everyone is potentially a victim.

[25:48] "Twenty percent is, like, the total correctness on the quizzes." "Do you still plan to just keep doing research? You know, there was the PhD-versus-grad-school question." "I am somewhat uncertain about grad school and things where I think I could be successful, but also, maybe with AI timelines or other considerations, trying to cash out impact in other ways might be more worth it." "Median OpenAI salary: supposedly $900,000." "Oh wow." "Which is quite a lot. So yeah, it seems that industry people definitely have a lot of resources, and fortunately all the top AGI labs that are pushing forward capabilities also hire safety people. I think a reasonable world where people are making sure that emerging technologies are safe is necessarily going to have a lot of safeguards and monitoring. Even if there's a small risk, it seems pretty good to try to mitigate that risk further, to make people more safe."

[26:48] Peace and the military side are very near each other, and I carefully consider how to treat that. When I was born, there were no AI systems and no computer systems. But in the current situation, young people start their lives with AI and robots and so on. Some technologies with AI will support their growing-up processes.

[27:17] People have been pretty bad at predicting progress in AI. Ten years in the future, there might be even wilder paradigm shifts. People don't really know what's coming next. But I suppose David beat Goliath; there's still some chance.

[27:31] The vast majority of AI researchers are focused on building safe, beneficial AI systems that are aligned with human values and goals. While it is possible that AI could become superintelligent and pose an existential risk to humanity, many experts believe that this is highly unlikely, at least in the near future.

[28:01] [Music]


Related Tags
AI Ethics, Human-AI Symbiosis, Technological Impact, Safety Concerns, AI in Medicine, Cyber Security, AI Risks, Robotics, AI Alignment, Future Predictions