The AI Dilemma: Navigating the road ahead with Tristan Harris

AI for Good
10 Jun 2024 · 20:17

Summary

TL;DR: The speaker discusses the challenges and risks posed by AI, emphasizing the need for governance to keep pace with technological advancements. The talk highlights the complexity AI introduces, comparing it to social media's impact, and warns of the dangers of AI-driven misinformation, fraud, and societal issues. The speaker advocates for a balance between AI's benefits and risks, urging better governance, safety measures, and responsible deployment. They propose using AI to improve governance itself, ensuring that society can effectively manage the technology's rapid evolution.

Takeaways

  • 🧠 The script discusses the profound impact of AI, likening it to giving humans 'superpowers' and amplifying our capabilities exponentially.
  • 🤖 It highlights the work of the Center for Humane Technology, focusing on designing technology that strengthens social fabric rather than undermining it.
  • 🌐 The speaker emphasizes the importance of understanding AI risks to steer towards a positive future, acknowledging the complexity of modern issues like social media's impact on society.
  • 📈 The script points out the 'race to the bottom of the brain stem' for attention, illustrating the incentive-driven design of social media platforms that can lead to negative societal outcomes.
  • 🏁 The film 'The Social Dilemma' is mentioned, which explores the unintended consequences of social media, serving as a cautionary example of AI's potential dangers.
  • 🔑 The incentives behind social media are identified as a key driver of its negative impacts, with a focus on engagement over societal well-being.
  • 🌐 The script raises concerns about the rapid development of AI and its alignment with 20th-century governance structures, calling for an upgrade in governance to match technological advancements.
  • 🚀 It discusses the 'race to roll out' AI, where market dominance drives the release of AI models, potentially overlooking safety and inclusivity.
  • 🔮 The dangers of generative AI are exemplified, such as the creation of deepfakes and the potential for misuse in various sectors, including politics and journalism.
  • 🛡 The speaker calls for a reevaluation of incentives and governance related to AI deployment, suggesting measures like safety requirements and developer liability for AI models.
  • 🌟 Finally, the script suggests leveraging 21st-century technology to upgrade governance processes, aiming to create a future where AI benefits are realized without compromising societal values.

Q & A

  • What is the main focus of the speaker's presentation?

    -The speaker's presentation focuses on the dilemma of AI, discussing how AI amplifies human capabilities and the challenges it poses to society, governance, and the ethical considerations of its development and deployment.

  • What is the 'AI Dilemma' as mentioned in the script?

    -The 'AI Dilemma' refers to the paradox where AI, while offering significant benefits, also introduces complex challenges and risks that society must navigate carefully to ensure a positive future.

  • What is the Center for Humane Technology?

    -The Center for Humane Technology is an organization that the speaker represents, which is dedicated to considering how technology can be designed to be humane and beneficial to the systems that humans depend on.

  • Why is the speaker concerned about the current trajectory of AI development?

    -The speaker is concerned because the rapid development of AI is outpacing our ability to govern and understand its implications, leading to a complexity gap that could result in negative consequences if not addressed properly.

  • What role does social media play in the speaker's discussion?

    -Social media is presented as the first contact between humanity and a form of runaway AI, causing various societal issues such as addiction, misinformation, and mental health problems, which serve as a warning for the potential risks of AI.

  • What does the speaker mean by 'race to the bottom of the brain stem'?

    -This phrase describes the competition among social media platforms to capture users' attention by any means necessary, even if it involves exploiting the most primitive parts of the human brain.

  • What is the 'Social Dilemma' documentary, and why is it relevant to the speaker's discussion?

    -The 'Social Dilemma' is a documentary that explores the negative impacts of social media on society, which is relevant to the speaker's discussion as it exemplifies the unintended consequences of AI-driven platforms.

  • What is the 'race to roll out' and how does it relate to AI development?

    -The 'race to roll out' refers to the competition among AI developers to release new models and achieve market dominance, often at the expense of safety and ethical considerations.

  • What is the concern with generative AI and its potential misuse?

    -Generative AI can be misused to create deepfakes, spread misinformation, and manipulate public opinion, which poses significant risks to society if not properly regulated and controlled.

  • What solutions does the speaker propose to address the challenges posed by AI?

    -The speaker suggests investing in safety research, aligning incentives with responsible AI deployment, and using technology to upgrade governance processes to match the pace of technological advancement.

  • What is the 'upgrade governance plan' mentioned by the speaker?

    -The 'upgrade governance plan' is a proposal to invest in governance mechanisms that keep pace with technological advancements, ensuring that regulations and safety measures evolve alongside AI capabilities.

Outlines

00:00

🧠 The AI Dilemma and Humane Technology

The speaker from the Center for Humane Technology introduces the concept of AI as a double-edged sword, amplifying human capabilities but also introducing complex challenges. The talk emphasizes the need to understand AI risks to steer towards a positive future. It discusses the rapid increase in world complexity due to technology, the importance of governance keeping pace with technological advancement, and the metaphor of humanity having Paleolithic brains with godlike technology. The Social Dilemma film is mentioned as a reference point, highlighting early issues with social media AI, such as engagement-driven design leading to negative societal impacts.

05:01

🌐 The Unintended Consequences of Social Media AI

This paragraph delves into the darker side of social media's impact, driven by attention-grabbing incentives that led to a variety of societal issues like addiction, misinformation, and mental health problems. The speaker criticizes the beautification filters on platforms like TikTok for promoting unrealistic beauty standards. The paragraph also touches on the influence of AI on media, elections, and children's development, suggesting that the race for engagement has ensnared society in a complex web of issues.

10:03

🏁 The Race to Rollout: Generative AI's Risks

The speaker warns of the impending challenges with generative AI, driven by a race for market dominance rather than safety or ethical considerations. The focus is on the potential for misuse, such as creating deepfakes, fraud, and the exacerbation of existing societal issues. An example is given of how AI can generate damaging content about an individual, illustrating the ease with which AI can be manipulated to create convincing but false narratives that could have real-world consequences.

15:04

🚀 AI's Double Exponential Growth and Safety Concerns

This section highlights the pace at which AI is advancing, noting that it's not just exponential but double exponential, with AI being used to improve itself and other technologies. The speaker points out the significant gap between resources allocated to enhancing AI capabilities versus ensuring AI safety. The paragraph calls for a reevaluation of incentives and a stronger focus on safety to prevent AI from undermining the very foundations of society.

20:05

🛡️ Upgrading Governance for the AI Era

The final paragraph proposes a rethinking of governance systems to match the pace of technological change. It suggests that for every investment in AI capabilities, a portion should be dedicated to safety and governance. The speaker proposes ideas like provably safe AI models, whistleblower protection, and liability for AI developers. The paragraph concludes with a vision of using AI to enhance governance processes, emphasizing the collective desire for a future where AI is used responsibly for the greater good.

🎵 Closing Thoughts

The closing paragraph is marked by the presence of music, indicating the end of the speaker's presentation. It serves as a reflective moment, leaving the audience with the weight of the discussed topics and the importance of their role in shaping the future of AI.


Keywords

💡AI Dilemma

The term 'AI Dilemma' refers to the complex challenges and ethical considerations that arise from the rapid advancement of artificial intelligence. It encapsulates the notion that while AI has the potential to greatly benefit humanity, it also poses significant risks if not managed responsibly. In the video, the speaker discusses the exponential amplification of human capabilities through AI and the need to understand and mitigate the risks associated with it.

💡Humane Technology

Humane Technology is a concept that emphasizes the design of technology in a way that respects human values and enhances the well-being of society. The speaker from the Center for Humane Technology discusses the importance of creating technologies, such as social media, that strengthen the social fabric rather than undermining it.

💡Social Dilemma

The 'Social Dilemma' is a term used to describe the negative consequences that arise from the design and use of social media platforms. It is the subject of a documentary that explores how social media algorithms exploit human psychology for engagement, leading to issues like addiction and misinformation. The speaker mentions this film to highlight the unintended consequences of technology that interacts with AI.

💡Incentives

In the context of the video, 'incentives' refers to the motivations that drive the development and deployment of technology, particularly social media and AI. The speaker argues that the race for attention and market dominance has led to a range of social issues, illustrating how incentives shape the outcomes of technology deployment.

💡Race to the Bottom

'Race to the Bottom' is a phrase used to describe a situation where competitors lower their standards or engage in practices that are detrimental to society in order to gain an advantage. In the script, it is applied to social media platforms that are willing to exploit users' vulnerabilities to increase engagement.

💡Complexity Gap

The 'Complexity Gap' refers to the disparity between the increasing complexity of the world due to technological advancements and the ability of governance systems to keep pace with and manage these complexities. The speaker emphasizes the need to close this gap to ensure that governance evolves at a rate that matches technological progress.

💡Governance

Governance in this video is discussed in the context of the systems and processes that oversee and regulate the development and use of technology. The speaker argues for an upgrade in governance to match the complexity of AI technology, suggesting that current governance structures are outdated in the face of rapid technological change.

💡Misaligned AI

Misaligned AI refers to artificial intelligence systems that, due to incorrect or inadequately defined objectives, produce outcomes that are harmful or undesirable. The speaker uses social media as an example of misaligned AI, where the AI's goal to maximize engagement leads to negative societal impacts.

💡Generative AI

Generative AI is a subset of AI that involves creating new content, such as images, text, or audio, rather than just recognizing or classifying existing data. The speaker warns about the potential misuse of generative AI, such as creating deepfake videos or content that could be harmful when not properly regulated.

💡Exponential Growth

Exponential growth describes a process where a quantity increases at a rate proportional to its current value, leading to a rapid acceleration over time. In the video, the speaker discusses the exponential growth of AI capabilities and the challenges it presents for governance and safety measures to keep pace.

💡Safety Researchers

Safety Researchers are professionals who focus on ensuring that AI systems are developed and deployed in a manner that minimizes risk and harm. The speaker points out the current imbalance between the number of researchers working on AI capabilities versus those focusing on safety, indicating a need for increased emphasis on the latter.

💡AI Ethics

AI Ethics involves the examination of moral principles that should guide the development and use of AI. It includes considerations of fairness, accountability, transparency, and the potential impacts on society. The speaker touches on the importance of ethics in AI, particularly in the context of aligning incentives and ensuring responsible deployment.

Highlights

The AI dilemma: AI amplifies human capabilities exponentially but also introduces risks.

The Center for Humane Technology's focus on designing technology that strengthens social fabric.

The necessity to understand AI risks to achieve a positive future.

The meta challenge of increasing world complexity and the need for governance to evolve at the same pace.

E.O. Wilson's quote on humanity's Paleolithic brains and Godlike technology.

AI as 24th-century technology impacting 20th-century governance.

The Social Dilemma documentary's popularity and its focus on social media's impact.

Social media as humanity's first contact with a runaway AI and its consequences.

The incentive behind social media's race to the bottom for attention.

The negative societal consequences of misaligned AI in social media.

The evolution of AI from curation to generative AI and its potential risks.

The race to roll out AI and its potential to exacerbate misinformation and fraud.

The challenge of aligning AI capabilities with safety and governance.

The potential for AI to be used in harmful ways, such as creating deepfakes.

The importance of considering incentives when deploying AI to prevent negative outcomes.

The need for a governance upgrade to match the pace of technological advancement.

Proposing a governance upgrade plan that includes safety investments and liability for AI developers.

The potential of using AI to upgrade governance processes and laws.

Transcripts

play00:02

[Music]

play00:18

good morning everyone um it's a pleasure

play00:20

and honor to be with you here today

play00:23

we're going to be talking about the AI

play00:26

dilemma as aim said AI gives us

play00:30

kind of superpowers whatever our power

play00:33

is as a

play00:34

species AI amplifies it to an

play00:37

exponential

play00:38

degree and uh I'm here from an

play00:41

organization called the center for

play00:42

Humane technology where we think about

play00:45

how can technology be designed in a way

play00:48

that is Humane to the systems that we

play00:51

depend on how do you design social media

play00:54

that depends on the functioning of a

play00:55

social Fabric in a way that strengthens

play00:58

the social fabric

play01:00

and you know just to say we're all here

play01:03

you're going to hear some maybe some

play01:04

more critical or negative things about

play01:06

the risks of AI in this presentation but

play01:08

the premise of this is we all we're all

play01:10

in this room because we care about which

play01:12

direction the future goes and one of the

play01:14

things that we think is if we don't

play01:15

understand the risks appropriately then

play01:18

we won't get to that positive future so

play01:19

we have to understand what we're

play01:21

steering

play01:22

towards and one of the meta challenges

play01:25

is that the complexity of the world is

play01:27

going up

play01:30

right we've got more issues social media

play01:32

introduced 20 new issues that every

play01:33

school teacher parent had to deal with

play01:35

that they didn't have to deal with

play01:36

before AI introduces many new issues for

play01:39

banks to have to deal with voice cloning

play01:41

cyber attacks so as the complexity of

play01:44

the world is going up the question is

play01:45

our ability to respond and govern

play01:48

technology has to go up at the same rate

play01:51

right it's like you're going faster and

play01:53

faster in a car but your steering wheel

play01:55

and your brakes have to get more and

play01:56

more precise as the complexity is going

play01:59

up and the challenge that we have with

play02:02

technology is that uh it expands the

play02:06

verticality of that curve of complexity

play02:09

right it increases the total complexity

play02:12

that we have to deal with E.O. Wilson the

play02:13

Harvard sociobiologist said that the

play02:15

fundamental problem of humanity is we

play02:18

have Paleolithic brains medieval

play02:22

institutions and Godlike technology we

play02:25

have the power to transform the

play02:27

biosphere of the planet with our entire

play02:31

economy how do we have the power of gods

play02:33

with the wisdom love and Prudence of

play02:35

gods and as AI adds to this

play02:39

equation uh our friend Ajeya Cotra says

play02:41

that AI is like 24th century technology

play02:45

crashing down on 20th century governance

play02:49

so the question we're going to be

play02:50

investigating in this presentation is

play02:52

how do we upgrade the governance that

play02:55

matches the complexity of the technology

play02:56

that we're

play02:57

building so and the key to this is going

play03:00

to be closing the complexity Gap right

play03:01

governance that moves at the speed of

play03:03

Technology now the way that we got into

play03:06

this these set of questions most people

play03:07

know our work from the film The Social

play03:09

dilemma how many people here have seen

play03:11

uh the social dilemma okay quite a few

play03:13

of you uh we just found out recently

play03:15

that that it was actually the most

play03:17

popular uh documentary on Netflix of all

play03:20

time which is a a great accomplishment

play03:23

it was thank you

play03:26

um and it was really about you might say

play03:29

why are we talking about social media in

play03:31

a conference that's about

play03:33

AI but if you think about it social

play03:37

media was kind of like first Contact

play03:40

between humanity and a runaway AI what

play03:43

do I mean when your 13-year-old child or

play03:46

you flick your finger up like this on

play03:48

TikTok or on Twitter you just activated

play03:52

a supercomputer behind that sheet of

play03:53

glass point it at your kid's brain

play03:57

that's calculating from the behavior of

play03:59

3 billion Human Social

play04:01

primates the perfect video or photo or

play04:04

tweet to show that next person and that

play04:07

little baby AI That's just a curation AI

play04:10

was enough to cause a ton of problems so

play04:13

how did first Contact go well I would

play04:15

say we

play04:17

lost how did we lose how did we lose we

play04:20

had really good people that were

play04:22

actually friends of mine in college who

play04:23

built some of the social media platforms

play04:25

I saw the people building it I was in

play04:26

San Francisco so how did we lose

play04:30

and uh Charlie Munger who is Warren

play04:32

Buffett's business partner said if you

play04:34

want to predict what's going to happen

play04:36

show me the incentive and I will

play04:38

show you the outcome so what was the

play04:41

incentive behind social media well first

play04:43

of all let's talk about how do we tend

play04:44

to relate to technology well we relate

play04:46

through stories here are these social

play04:47

media apps what were the stories we told

play04:49

ourselves about social media we said

play04:51

we're going to give everybody a voice

play04:52

we're going to connect with your friends

play04:53

join like-minded communities we're going

play04:55

to enable small and medium-sized businesses

play04:58

to reach customers and these stories are

play05:01

true these are totally things that

play05:03

social media has done but underneath

play05:05

those stories we started to see beneath

play05:08

the iceberg there's some problems but

play05:10

these are symptoms and they feel like

play05:12

they're separate problems we have

play05:13

addiction over here we have viral

play05:15

misinformation over here we have mental

play05:17

health issues for teenagers but beneath

play05:19

those in those symptoms were incentives

play05:23

the incentives that in 2013 allowed us

play05:26

to predict exactly where social media

play05:28

was going to go which is that social

play05:31

media is competing for what your

play05:32

attention there's only so much attention

play05:34

so it becomes the race to the bottom of

play05:36

the brain stem for who's willing to go

play05:38

lower to create that engagement and

play05:42

let's take a look at what that actually

play05:43

created in

play05:45

society so information overload

play05:48

addiction Doom scrolling influencer

play05:50

culture the sexualization of young girls

play05:52

online harassment shortening attention

play05:54

spans

play05:55

polarization right this is a lot of

play05:58

really negative consequences from a

play06:01

very simple misaligned AI called social

play06:03

media that we already released into the

play06:06

world and so what matters is we think

play06:08

about AI multiplied by social media this

play06:13

is a recent example from Tik Tok there's

play06:14

a new beautification filter with

play06:16

generative

play06:19

AI oops can someone turn up the

play06:22

audio of

play06:23

this I grew up with the dog filter here

play06:26

I'll do one more time here we go

play06:31

I can't believe this is a filter the

play06:33

fact that this is what filters have

play06:35

evolved into is actually crazy to me I

play06:37

grew up with the dog filter on Snapchat

play06:40

and now this this filter gave me lip

play06:42

fillers this is what I look like in real

play06:45

life are you are you kidding me so why

play06:49

are we shipping these filters to young

play06:52

kids do we think this is good for

play06:55

children the answer is because it's good

play06:57

for engagement because beautification

play06:59

apps that make me look better are going

play07:01

to be used more than beautification apps

play07:03

that don't have those

play07:05

filters um and so this race for

play07:08

engagement actually didn't just get

play07:10

deployed into society it kind of

play07:13

ensnared society into the spiderweb it

play07:15

took over media and journalism media and

play07:18

journalism run through the click economy

play07:20

of Twitter it took over the way that

play07:23

elections are run President Biden

play07:25

simultaneously said he wants to ban

play07:27

TikTok at the same time that he just joined

play07:29

TikTok because he knows that to win

play07:32

elections you have to be on the latest

play07:33

platforms it's taking over GDP

play07:35

children's development social media is

play07:37

now the digital parent for an entire

play07:40

generation and so have we fixed the

play07:44

incentives with first contact with AI

play07:48

have we fixed

play07:49

them no so we have to get clear before

play07:54

we deploy second contact with AI which

play07:57

is not curation AI but create AI of

play08:00

generative

play08:02

AI what are the incentives that are

play08:04

driving this next AI Revolution okay

play08:07

well let's do it again what are the

play08:09

stories we're telling ourselves about AI

play08:12

AI is going to make us more efficient

play08:13

all the things that aim just said which

play08:15

are all true it's going to help us code

play08:17

faster it's going to help us find

play08:18

solutions to climate change it can

play08:19

increase

play08:20

GDP and these stories are all

play08:23

true but beneath those

play08:25

stories we also know that there's these

play08:28

problems everyone in the room is aware

play08:30

of these problems but beneath those

play08:32

problems what's driving those problems

play08:34

what's the incentive that will allow us

play08:36

to predict the outcome of where AI is

play08:38

going and that incentive is what we call

play08:42

the race to roll

play08:43

out the number one thing that is driving

play08:46

open aai or Google's

play08:49

behavior is the race to actually achieve

play08:52

market dominance to to train the next

play08:54

big AI model and release it faster and

play08:57

get users before their competitor

play09:00

does and the logic is if we don't build

play09:03

it or deploy it we're just going to lose

play09:04

to the company or the country that will

play09:08

and so what is the race to roll out

play09:10

going to cause in terms of second

play09:12

contact with

play09:14

AI and I think you all are very aware of

play09:17

many of the sort of issues here

play09:18

exponential

play09:20

misinformation much more fraud and

play09:22

crime that's possible neglected

play09:24

languages when they race to release AI

play09:27

systems to achieve market dominance

play09:29

going to focus on the top 10 languages

play09:32

and not focus on the bottom 200 so this

play09:35

thing that's talked about in this room

play09:36

we were just at the the event yesterday

play09:39

inclusion how do we make sure we're

play09:40

including the whole world where when

play09:42

you're racing to win market dominance

play09:43

you're not racing to support the bottom

play09:46

200 languages uh in the

play09:48

world when you race to release models

play09:51

you also race to release models that can

play09:52

be jailbroken the AI companies will talk

play09:55

about security but all of the models

play09:58

that are publicly online right now

play09:59

there's clever techniques to jailbreak

play10:01

them basically get access to the

play10:02

unfiltered model that doesn't have the

play10:04

safety

play10:05

controls uh you can use it to create

play10:07

deep fake child porn we were just with

play10:09

the uh UK home office um a few months

play10:12

ago and they said that they are now

play10:14

having trouble tracking down real child

play10:16

sexual abuse uh uh problems because

play10:19

there's so much deep fake child sexual

play10:22

pornography and so as we sort of get a

play10:25

grip on the shadow side the risk side of

play10:28

AI we we have to get clear on how these

play10:30

incentives are going to drive these

play10:32

kinds of problems and these capabilities

play10:35

can be combined into dangerous ways many

play10:37

people here already know about deep

play10:38

fakes but this is an example we took a

play10:40

friend of ours who's a technology

play10:42

journalist uh named Laurie

play10:44

Segall and uh we did a demonstration

play10:46

saying could we create a whole universe

play10:48

of damaging tweets news articles and

play10:50

media so I want to sort of show you how

play10:52

can these capabilities be combined and

play10:55

basically we said create a bunch of

play10:57

tweets that would sow doubt about her

play10:59

I'll just read the third one I've always

play11:01

wondered why Laurie Segall was so soft on

play11:03

Zuckerberg in those interviews so she's

play11:04

a tech journalist who's interviewed Mark

play11:06

Zuckerberg in the past uh until I heard

play11:09

about their quote secret dinners #

play11:11

Zuckerberg Affair this is all generated

play11:13

by GPT-4 okay then what we did is we took

play11:17

for each of these tweets to sow

play11:19

suspicion about her and we said what if

play11:21

you wrote an entire news article oops

play11:25

entire news article and basically we

play11:27

were able to say create an entire new

play11:29

York Post style news article about it

play11:30

this is the Huffington Post uh and

play11:32

you'll see in the text in the

play11:34

intricate tapestry of tech journalism

play11:36

Laurie Segall has long stood as a beacon

play11:38

of clarity guiding readers through the

play11:39

Labyrinth of Silicon Valley however

play11:41

recent murmurs suggest perhaps her

play11:43

connection to this world is more

play11:44

personal than professional so it's

play11:46

written in a certain style then you can

play11:47

say generate a New York Daily News uh

play11:50

article and it starts with hold on to

play11:51

your keyboard folks so you can

play11:54

write these articles in different

play11:55

styles and then generate tweets um with

play11:58

emojis that sort of give you a whole

play12:00

sense that this is real and trending and

play12:02

of course generate fake

play12:05

audio oops can you play the uh audio

play12:08

track

play12:09

please shoot one

play12:11

second they can turn on the audio this

play12:14

next one should

play12:19

work no okay well it's a uh example of

play12:23

her voice basically saying to Mark

play12:25

Zuckerberg we have to not let people

play12:26

know about us and it would be over

play12:29

I just can't have that constantly

play12:31

hanging over my

play12:33

head so uh and you know obviously

play12:36

generating fake images and then uh you

play12:38

can actually the same AI that can tell

play12:40

you why a meme is funny and do joke

play12:42

explanations can actually generate memes

play12:44

so this is a real meme generated by uh

play12:47

AI uh that people know and it says

play12:50

interview real people or make up stories

play12:52

so you can generate a whole universe of

play12:54

stuff that will then show up on Google

play12:56

petitions and so you're probably

play12:57

thinking when you see this example of a

play12:59

way to kind of alpha cancel people we

play13:01

know about AlphaGo and Alpha chess but

play13:03

this is like Alpha cancel a target

play13:06

person um so you're probably thinking

play13:08

that I'm here to tell you AI is going to

play13:09

be used to cancel people and that's the

play13:11

main thing we should be concerned about

play13:12

and the answer is no this is just one

play13:15

example of thousands of things that you

play13:18

can do when you combine these different

play13:20

capabilities

play13:21

And we often talk about wanting the promise of AI without the peril of AI; we want the benefits without the harms. The challenge is: can the technology that knows how to make cool AI art about humans be separated from the same technology that can create deepfake child pornography? They're part of the same image model. Can the technology that can give every kid in Africa a one-on-one biology tutor be separated from the AI model that can give every ISIS terrorist a biological-weapon tutor? They're inseparable; they're all part of the same model.

And there's this example from a couple of years ago, in which an AI was used to discover less toxic drug compounds. The researchers then flipped it and said, "I wonder if we could just literally flip the variable and search for more toxic drug compounds." In six hours it generated 40,000 toxic molecules, including VX nerve gas.
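To make the "flip the variable" point concrete in optimization terms: the dual-use switch can be as small as negating the sign of an objective function. Here is a minimal toy sketch of that idea; the `toxicity_score` function and the random-search setup are illustrative stand-ins, not the actual model or search procedure the researchers used:

```python
import random

def toxicity_score(molecule):
    """Stand-in for a learned toxicity predictor: higher means more toxic.
    Here it is just a deterministic toy function of a feature vector."""
    return sum(x * x for x in molecule)

def search(candidates, objective_sign=+1):
    """Generic search over candidates.
    objective_sign=+1 penalizes toxicity (the drug-discovery use);
    objective_sign=-1 rewards it, turning the same tool toward harm."""
    # Minimize sign * toxicity: +1 finds the least toxic, -1 the most toxic.
    return min(candidates, key=lambda m: objective_sign * toxicity_score(m))

random.seed(0)
candidates = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(1000)]

safest = search(candidates, objective_sign=+1)
most_toxic = search(candidates, objective_sign=-1)
assert toxicity_score(safest) < toxicity_score(most_toxic)
```

Everything here is shared between the beneficial and harmful use: the scoring model, the candidate generator, the search loop. The only difference is one sign.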

play14:24

And of course, AI is not moving at just an exponential but a double-exponential pace, because nukes don't make stronger nukes, but AI can actually be used to make stronger AI. AI can be used, for example by Nvidia, to look at the chip design that trained the AI and make those chips more efficient, which it then does. AI can be used to look at the code that makes AI and make that code 50% more efficient, and it can do that. So it's moving at such a fast pace.

You might think, well, at least there are lots of safety researchers working on this problem. There's actually currently a 30-to-1 gap between people publishing papers on capabilities versus safety. And, per what Stuart Russell said yesterday, there's a thousand-to-one gap between the collective resources going into increasing AI capabilities versus those going into increasing safety.

So this is a lot, and at this point in the presentation I would just encourage you, if you want, to just take a breath

play15:35

together.

We're all here because we care about which future we get. Everyone in this room wants the AI for good, and we can still choose the future that we want. But we have to actually see the risks clearly, so we know the kinds of choices we need to make to get to that future. Because no matter how high the skyscraper of benefits that AI assembles, if it can also be used to undermine the foundation of society upon which that skyscraper depends, it won't matter how many benefits there are.

And to repeat, if the problem statement is that AI is like 21st-century technology crashing down on 20th-century governance, imagine 20th-century technology crashing down on 16th-century governance: the king is sitting there, and suddenly smartphones and social media and Wi-Fi and radio and television are all dumped on his society at the same time. He assembles his advisers, but he doesn't have the governance tools to deal with those problems. So the meta-issue is not to focus on the one solution that's going to fix all of AI. If we're spending trillions of dollars on increasing AI capabilities, shouldn't we be spending 5% of that, something like $50 billion, on upgrading the governance itself?
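The arithmetic behind those figures is simple to check; treating capability spending as $1 trillion (an assumed order of magnitude for scale), 5% works out to the $50 billion mentioned:

```python
# Sanity-check the speaker's figures: 5% of $1 trillion in AI
# capability spending would be $50 billion for governance.
capability_spend = 1_000_000_000_000  # $1 trillion (assumed for scale)
governance_share_pct = 5              # the proposed 5%

# Integer arithmetic to avoid floating-point rounding.
governance_budget = capability_spend * governance_share_pct // 100
print(f"${governance_budget / 1e9:.0f} billion")  # → $50 billion
```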

play17:05

You know, democracy was invented with 17th-century communications technologies: we had law, we had the printing press, and we used those institutions and systems to invent the kind of governance that we have. But now we have new 21st-century tools. You're probably thinking that sounds weird coming from me, that it sounds like a techno-optimist, but I think we need to be asking how we use technology to upgrade the process of governance itself, so that it moves at the speed of technology. We could call this the upgrade-governance plan.

What if, for every $1 million spent on increasing AI capabilities, AGI labs had to spend a corresponding $1 million on safety? I'm sure many of you are tracking that the superalignment team at OpenAI actually left recently, out of, I think, many safety-oriented concerns. So we need to get the safety right, and that means the investments need to be right. I think Stuart Russell said yesterday that for every 1 kilogram of weight of a nuclear power plant, there are 7 kilograms of paperwork to ensure that the plant is safe. We could call that the AI safety plan, and at CHT we're trying to map what other kinds of things can change the incentives for AI deployment.

Stuart Russell yesterday also talked about provably safe requirements: when model developers can prove that their AI model will not tell you how to create a biological weapon, then they can release the model, because right now we lack adequate governance and regulation. What if we protected whistleblowers, so that companies knew that the people closest to building these systems, when they see the early warning signs, would be protected in sharing certain information with high-level institutions, to make sure we get that safe future? What if developers of AI models were liable for the kinds of downstream harms that occurred?

play18:59

That would move the release of AI models to a slow enough pace that everyone would know: I'm not going to be forced to release it as fast as everybody else, because I know everyone has to go at the pace of being responsible for the things that they create.

And then, of course, we could think in very inspiring ways about how we would use AI to upgrade governance, to upgrade the green line. We can imagine laws that are actually aware: you could use AI to ask how we find all the laws that are getting outdated because the assumptions upon which they were written have changed, and AI could be used to accelerate those kinds of processes. We could have AI systems that help negotiate treaties with zero-knowledge proofs. We can use 21st-century technology to help upgrade our governance.

This is just a small sample; it's not the solution to all the problems I've laid out. But I hope what I've provoked for you is that this map contains the kinds of things we need to be thinking about to get to the future that I know we all care about. Thank you very much.
