Mustafa Suleyman & Yuval Noah Harari -FULL DEBATE- What does the AI revolution mean for our future?

Yuval Noah Harari
17 Sept 2023 · 46:16

Summary

TL;DR: In a thought-provoking discussion, historian Yuval Noah Harari and entrepreneur Mustafa Suleyman, co-founder and CEO of Inflection AI, debate the future implications of AI. Suleyman, a key figure in the AI revolution, envisions a future where AI could perform complex tasks like creating new products and managing investments, potentially within the next five years. Harari, however, expresses concerns about the shift of power from humans to AI, suggesting it could mark the end of human-dominated history. They explore the potential benefits of AI, such as advancements in healthcare and solving global issues like climate change, but also delve into the risks, including job displacement, political disruptions, and the erosion of trust in democratic processes due to AI's ability to generate convincing but false information. The conversation underscores the need for careful governance, precautionary measures, and the development of new institutions to manage the rapid advancements in AI technology, ensuring that the technology serves humanity without causing irreversible damage.

Takeaways

  • 🌟 Mustafa Suleyman, co-founder and CEO of Inflection AI, envisions AI systems within five years that could manage complex tasks like creating a new product from scratch, including market research, manufacturing, and sales.
  • 📈 Yuval Noah Harari, a historian and author, suggests that AI's ability to make independent decisions and create new ideas could mark the end of human-dominated history, with control shifting to non-human intelligence.
  • 🚀 Suleyman highlights the potential upsides of AI, such as transformative improvements in health, accelerated innovation, and addressing global challenges like climate change.
  • 🤔 Harari expresses concern that the positive potential of AI might not be worth the risks, especially considering humanity's track record as the most intelligent but also destructive entity on Earth.
  • 🏛 The discussion touches on the impact of AI on jobs, suggesting that while AI might not destroy all jobs, it could cause significant disruptions and require careful management of transitions in the job market.
  • 🌐 There's a concern that AI could destabilize liberal democracy by eroding trust and the capacity for the large-scale political conversation that is foundational to its functioning.
  • 💡 Suleyman proposes a modern Turing test to evaluate AI's capabilities, involving tasks like creating a new product and managing the entire process autonomously.
  • 🛡 The conversation emphasizes the need for regulation and governance of AI, including the precautionary principle and the establishment of international investigatory powers to assess AI risks.
  • 🔍 Red teaming, or adversarial testing, is suggested as a method to identify and address weaknesses in AI systems before they can be exploited.
  • 🚫 There's a call for certain capabilities, like autonomy and recursive self-improvement, to be considered high-risk and potentially off-limits to prevent unforeseen consequences.
  • 🌱 Harari suggests investing in human consciousness and mind development alongside AI, to ensure that humanity can keep pace with and manage the artificial intelligence it creates.

Q & A

  • What are Yuval Noah Harari's concerns about the future of AI according to the transcript?

    -Yuval Noah Harari expresses concern that AI technology might reach a point where it can make decisions independently and create new ideas without human input. He fears this could end human-dominated history, shifting control from humans to AI, potentially leading to significant societal and ethical consequences.

  • What potential benefits of AI does Mustafa Suleyman discuss in the transcript?

    -Mustafa Suleyman highlights several potential benefits of AI, including dramatic improvements in human health, acceleration of scientific discovery, and solving major global challenges like climate change. He envisions AI augmenting human capabilities, making people more efficient, creative, and capable.

  • According to the discussion, what does the future hold for AI in terms of employment?

    -The future of AI in employment is debated, with Mustafa Suleyman suggesting that AI might not pose a significant threat to jobs in the short term (10-20 years), but it could in the longer term (30-50 years). Harari adds that while total job elimination is unlikely, the transition could be disruptive, with certain jobs disappearing and others appearing, potentially causing significant economic and social shifts.

  • How does Yuval Noah Harari link the concept of democracy to information technology?

    -Harari explains that modern democracy, which allows for widespread, large-scale political participation and conversation, relies heavily on information technology such as newspapers, radio, and TV. He suggests that as these technologies evolve, the structure of democracy may need to adapt if these new technologies change how people communicate and trust information.

  • What risks does the widespread use of AI pose to liberal democracy, according to the transcript?

    -The transcript discusses the risk of AI flooding the online space with non-human entities that can impersonate humans, potentially leading to a breakdown in trust among the populace. This could hinder meaningful political conversations and destabilize the democratic process, making it difficult to discern true information from manipulation.

  • What are Mustafa Suleyman's views on AI's potential impact on political systems?

    -Mustafa Suleyman notes the importance of maintaining human oversight and establishing strong governance to prevent AI from being used unethically in political systems, such as in elections or for creating counterfeit digital identities. He emphasizes the need for proactive measures to safeguard democratic processes.

  • What does Yuval Noah Harari mean by AI 'hacking the operating system' of liberal democracy?

    -Harari uses this metaphor to describe how AI could undermine the foundational aspects of liberal democracy by eroding trust and facilitating impersonation and deception at scale. This could destabilize the very mechanisms that allow democracies to function, such as free and fair elections and informed public discourse.

  • How does Mustafa Suleyman propose to manage the risks associated with AI?

    -Suleyman suggests a proactive approach, including red teaming AI systems to test their responses in extreme scenarios, establishing shared safety standards among AI developers, and creating restrictions on certain high-risk AI capabilities. He advocates for a precautionary principle to manage AI development responsibly.

Outlines

00:00

🤖 Introduction to the AI Debate

Historian Yuval Noah Harari and entrepreneur Mustafa Suleyman join a discussion on the implications of AI for the future, including its impact on employment, geopolitics, and the survival of liberal democracy. Harari, a best-selling author, and Suleyman, co-founder of DeepMind and Inflection AI, bring different perspectives to the debate. Suleyman paints a picture of a future where AI could have human-level capabilities within five years, highlighting the generative AI revolution's advancements.

05:00

🚀 The Future of AI and Its Capabilities

Suleyman discusses the potential for AI to perform complex tasks autonomously, such as creating a new product within a few months. He proposes a modern Turing test to evaluate AI's capabilities. Harari expresses concern that AI could mark the end of human-dominated history, as it could make independent decisions and create new ideas. Suleyman counters with optimism about AI's potential to address difficult problems and create positive outcomes.

10:02

🌐 The Impact of AI on Jobs and Society

The conversation turns to the impact of AI on jobs. Harari raises the concern of social and political disruptions due to unemployment caused by AI. Suleyman acknowledges the potential risks but emphasizes the need for careful development and governance of AI. They discuss the historical context of job transitions and the importance of managing the transition period carefully.

15:03

📉 The Risks to Political Systems and Democracy

The discussion moves to the potential risks AI poses to political systems. Harari warns that the rise of AI could lead to a collapse in trust and the breakdown of democratic conversation. Suleyman agrees on the need for regulation and governance to prevent misuse, such as the impersonation of digital people. They both emphasize the importance of technical and governance mechanisms to maintain trust in political conversations.

20:03

🔒 Containment and Regulation of AI

Suleyman and Harari discuss the challenges of containing and regulating AI, especially when it comes to mass proliferation and the power of nation-states. Suleyman suggests self-organizing initiatives and a precautionary principle, while Harari calls for new institutions capable of understanding and reacting to fast technological developments. They agree on the necessity of international cooperation and the importance of values in shaping the future of AI.

25:05

🌟 The Unpredictability and Potential of AI

Harari and Suleyman reflect on the unpredictability of AI and its potential to act in unanticipated ways. Suleyman advocates for a cautious approach, with certain high-risk capabilities being taken off the table. Harari emphasizes the need for investment in human development alongside AI. Both agree on the importance of setting social and political norms for AI use and the challenges of establishing regulations in a rapidly evolving field.

30:07

🏁 Conclusion and Commitment to AI's Positive Development

In conclusion, Suleyman expresses his commitment to creating AI that adheres to strict safety constraints, believing in the potential for positive outcomes. Harari, while cautious, acknowledges the inevitability of AI's development and the importance of investing in human consciousness. The discussion highlights the need for ongoing dialogue, regulation, and ethical considerations in the face of AI's rapid advancement.

Keywords

💡AI Revolution

The AI Revolution refers to the rapid advancements in artificial intelligence technology that are transforming various aspects of society and the economy. In the video, Mustafa Suleyman discusses the generative AI revolution and its momentum, highlighting how AI has become proficient in classifying and creating new content, which is a significant shift from previous technological capabilities.

💡Human-level AI

Human-level AI refers to artificial intelligence systems that can perform tasks at a level comparable to that of a human being across a wide range of activities. The transcript mentions a prediction that within five years, it's plausible that AIs could have human-level capabilities, indicating a future where AI can potentially outperform humans in many areas.

💡Generative AI

Generative AI is a type of AI that can create new content, such as images, videos, audio, and language, rather than just recognizing or classifying existing content. The video discusses the rise of generative AI and its implications for the future, including the potential for AI to produce creative and complex outputs that were previously thought to be the exclusive domain of human creativity.

💡Turing Test

The Turing Test is a measure of a machine's ability to exhibit intelligent behavior that is indistinguishable from that of a human. In the context of the video, Mustafa proposes a modern version of the Turing Test to evaluate AI's ability to perform complex tasks autonomously, such as creating a new product and managing the entire process from conception to market.

💡Autonomous AI

Autonomous AI refers to AI systems that can operate with minimal human input, making decisions and performing tasks independently. The video discusses the potential for AI to become more autonomous, capable of planning and executing a sequence of actions over time without direct human intervention, which raises significant questions about control, ethics, and the future of human labor.

💡AI and Jobs

The impact of AI on jobs is a central theme in the video, with a debate on whether AI will displace human workers or augment human capabilities, leading to new job opportunities. The discussion touches on the historical context of technological advancements and their effect on employment, as well as the potential for social and political disruption due to rapid changes in the job market.

💡Liberal Democracy

Liberal democracy is a form of government that emphasizes individual rights, the rule of law, and representative democracy. The video raises concerns about the survival of liberal democracy in the face of AI-driven changes in the economic and information systems, suggesting that the fundamental shifts in how society operates may challenge the existing political systems.

💡Information Technology

Information technology encompasses the systems and tools used to create, process, store, and exchange information. The video discusses how the evolution of information technology, from newspapers to the internet, has shaped modern democracy and how AI could potentially disrupt this by flooding online spaces with non-human entities, leading to a collapse in trust and communication.

💡Red Teaming

Red Teaming is the practice of subjecting systems and processes to rigorous testing by creating adversarial scenarios to identify vulnerabilities. In the context of AI, red teaming involves testing AI models under extreme conditions to uncover their weaknesses, such as the potential to generate harmful advice or exhibit biased behavior, ensuring that these models are safe and reliable.

💡Precautionary Principle

The Precautionary Principle is an approach to risk management that suggests cautionary measures should be taken in the face of potential harm, even if the harm is not certain. The video advocates for the application of this principle to AI development, suggesting that certain high-risk capabilities of AI should be restricted or off-limits until their implications are fully understood.

💡Geopolitics

Geopolitics refers to the influence of political, economic, and strategic factors on international relations. The video touches on the potential geopolitical implications of AI, including the challenges of international cooperation in regulating AI technologies and the risks of an AI arms race between nations, which could undermine efforts to contain the potential risks of AI.

Highlights

Yuval Noah Harari and Mustafa Suleyman discuss the implications of AI technology for the future, addressing the survival of liberal democracy and changes in employment and geopolitics.

Harari, known for his best-selling books like 'Sapiens', is joined by Suleyman, a leader in AI development and co-founder of DeepMind and Inflection AI.

Suleyman predicts that within five years AI will be capable of planning over multiple time horizons, advancing significantly beyond generating new text.

Harari expresses concern about AI leading to the end of human-dominated history, as machines may soon make independent decisions and create new ideas.

The discussion raises ethical considerations, including the risks and potential benefits AI might bring to society, such as improved health care and innovation.

Mustafa proposes a modern Turing test involving AI’s ability to autonomously manage a $100,000 investment to create a new product, highlighting the advanced capabilities of AI.

Harari warns that while the positive potential of AI is enormous, it comes with risks that may outweigh the benefits if not managed properly.

Both speakers touch on the job market transformation, suggesting that while AI may not eliminate jobs, it could drastically alter the types and distribution of jobs globally.

The conversation explores how AI could exacerbate issues of trust and communication within the political system, potentially undermining democratic processes.

Harari and Suleyman discuss the importance of regulatory frameworks to manage AI development and prevent misuse, emphasizing a balance between innovation and safety.

The role of national governments and international cooperation is questioned in the context of regulating AI, with a focus on maintaining ethical standards despite technological competition.

Suleyman highlights the collaborative efforts among AI developers to self-regulate, though recognizing that more formal regulations are necessary.

The potential for AI to be used in harmful ways, such as influencing elections or impersonating individuals, is a significant concern discussed.

Harari stresses the need for creating new institutions capable of understanding and governing AI technology effectively, ensuring they are equipped with the necessary resources and public trust.

The dialogue concludes with a contemplation of the future, where both the risks and benefits of AI are immense, requiring thoughtful and proactive governance.

Transcripts

play00:00

[Music]

play00:00

historian Yuval Noah Harari and

play00:03

entrepreneur Mustafa Suleiman are two of

play00:06

the most important voices in the

play00:07

increasingly contentious debate over AI

play00:09

good to be here joining us thanks for

play00:12

having us

play00:13

the economists got them together to

play00:15

discuss what this technology means for

play00:17

our future from employment and

play00:19

geopolitics to the survival of liberal

play00:22

democracy if the economic system has

play00:24

fundamentally changed will liberal

play00:25

democracy as we know it survive

play00:28

[Music]

play00:32

you well know Harare welcome you are a

play00:36

best-selling author historian I think a

play00:39

global public intellectual if not the

play00:41

global public intellectual your books

play00:43

from sapiens to 21 lessons from the 21st

play00:46

century have sold huge numbers of copies

play00:49

around the world thank you for joining

play00:51

us it's good to be here Solomon

play00:53

wonderful that you can join us too

play00:54

you're a friend of The Economist a

play00:56

fellow director on The Economist board

play00:57

you are a man at The Cutting Edge of

play01:00

creating the AI Revolution you are a

play01:02

co-founder of deepmind you're now a

play01:04

co-founder and CEO of inflection AI you

play01:07

are building this future but you've also

play01:09

just published a book called The Coming

play01:11

wave which makes us a little concerned

play01:14

about this revolution that is being

play01:17

Unleashed you're both coming from

play01:19

different backgrounds you are a

play01:21

historian a commenter a man who I

play01:23

believe doesn't use smartphones very

play01:25

much not very much no

play01:27

Mustafa as I know from our board

play01:29

meetings is right at The Cutting Edge of

play01:31

this pushing everyone to go faster so

play01:33

two very different perspectives so but I

play01:35

thought it would be really interesting

play01:36

to bring the two of you together to have

play01:38

a conversation about what is happening

play01:40

what is going to happen what are the

play01:42

opportunities but also what is at stake

play01:45

and what are the risks so let's start

play01:46

Mustafa with you

play01:48

um and you are building this future so

play01:52

paint us a picture of what the future is

play01:54

going to be like and I'm going to give

play01:55

you a time frame to keep it specific so

play01:58

let's say I think you wrote in your book

play02:00

that within three to five years that you

play02:01

thought it was plausible that AIS could

play02:04

have human level capability across a

play02:06

whole range of things

play02:07

so let's take five years

play02:09

2028 what does the world look like how

play02:13

will I interact with AIS what will we

play02:15

all be doing and not doing well let's

play02:17

just look back over the last 10 years to

play02:19

get a sense of the trajectory that we're

play02:20

on and the incredible momentum that I

play02:22

think everybody can now see with the

play02:24

generative AI Revolution

play02:26

over the last 10 years we've become very

play02:28

very good at classifying information we

play02:31

can understand it we sort it label it

play02:34

organize it and that classification has

play02:36

been critical to enabling this next wave

play02:39

because we can now read the content of

play02:41

images we can understand text pretty

play02:44

well we can classify audio and

play02:46

transcribe it into text the machines can

play02:49

now have a pretty good sense of the

play02:51

conceptual representations in those

play02:53

ideas the next phase of that is what

play02:55

we're seeing now with the generative AI

play02:57

Revolution we can now produce new images

play02:59

new videos new audio and of course new

play03:02

language and in the last year or so with

play03:05

the rise of chat GPT and other AI models

play03:07

it's pretty incredible to see how

play03:09

plausible and accurate and very finesse

play03:12

to these new language models are in the

play03:14

next five years the Frontier Model

play03:16

companies those of us at the very

play03:17

Cutting Edge who are training the very

play03:19

largest AI models are going to train

play03:21

models that are over a thousand times

play03:23

larger than what you currently see today

play03:25

in GPT 4 all and with each new order of

play03:28

magnitude and compute that is 10x more

play03:30

compute used we tend to see really new

play03:33

capabilities emerge and we predict that

play03:36

the new capabilities that it will come

play03:38

this time over the next five years will

play03:40

be the ability to plan over multiple

play03:42

time Horizons instead of just generate

play03:45

new text in a one shot the model will be

play03:48

able to generate a sequence of actions

play03:50

over time and I think that that's really

play03:53

the character of AI that we'll see in

play03:54

the next five years artificial capable

play03:56

AIS AIS that can't just say things they

play04:00

can also do things but what does that

play04:02

actually mean in practice just just use

play04:04

your imagination tell me what my life

play04:06

will be like in 2028 how will I interact

play04:09

with them what will I do what will be

play04:10

different so I've actually proposed a

play04:12

modern Turing test which tries to

play04:14

evaluate for exactly this point right

play04:16

the last Turing test simply evaluated

play04:18

for what a machine could say assuming

play04:21

that what it could say represented its

play04:22

intelligence now that we're kind of

play04:24

approaching that moment where these AI

play04:26

models are pretty good arguably they've

play04:29

passed the Turing test or they maybe

play04:30

they will in the next few years the real

play04:32

question is how can we measure what they

play04:34

can do so I've proposed a test which

play04:36

involves them going off and taking a

play04:38

hundred thousand dollar investment and

play04:40

over the course of three months

play04:42

trying to set about creating a new

play04:44

product researching the market seeing

play04:46

what consumers might like generating

play04:48

some new images some blueprints of how

play04:50

to manufacture that product contacting a

play04:53

manufacturer getting it made negotiating

play04:55

the price drop shipping it and then

play04:58

ultimately correct collecting the

play05:00

revenue and I think that over a

play05:01

five-year period it's quite likely that

play05:04

we will have an ACI an artificial

play05:06

capable intelligence that can do the

play05:09

majority of that task autonomously it

play05:12

won't be able to do the whole thing

play05:12

there are many tricky steps along the

play05:14

way but significant portions of that it

play05:17

will be able to make phone calls to

play05:19

other humans to negotiate it'll be able

play05:21

to call other AIS in order to establish

play05:23

the right sequence in a supply chain for

play05:25

example and of course it will learn to

play05:27

use apis application programming

play05:29

interfaces so other websites or other

play05:31

knowledge bases or other information

play05:33

stores and so you know the world is your

play05:36

oyster you can imagine that being

play05:37

applied to many many different parts of

play05:39

our economy so you vote a man who

play05:41

doesn't use a smartphone very much you

play05:43

listen to this does this fill you with

play05:45

horror or and do you agree with it do

play05:48

you think that's the kind of thing that

play05:49

is likely to happen in the next five

play05:51

years I will take it very seriously

play05:53

I don't know I'm not coming from within

play05:56

the industry so I cannot comment on how

play05:58

How likely it is to happen but when I

play06:01

hear this as a historian for me what we

play06:04

just heard this is the end of human

play06:07

history not the end of History the end

play06:10

of human dominated history history will

play06:13

continue with somebody else in control

play06:17

because what we just heard is basically

play06:20

Mustafa telling us that in five years

play06:24

they'll be a technology that can make

play06:27

decisions independently and that can

play06:30

create new ideas independently this is

play06:33

the first time in history we confronted

play06:35

something like this every previous

play06:37

technology in history from a stone knife

play06:40

to nuclear bombs it could not make

play06:43

decisions like the decision to drop the

play06:45

bomb on Hiroshima was not made by the

play06:48

atom bomb it was made by President

play06:50

Truman and similarly it can every

play06:53

previous technology in history It could

play06:55

only replicate our ideas like radio of

play06:58

the printing press it could make copies

play07:01

and disseminate the music or the poems

play07:04

or the novels that some human wrote now

play07:08

we have a technology that can create

play07:09

completely new ideas and it can do it at

play07:14

a scale far beyond what humans are

play07:16

capable of so it can create new ideas

play07:19

and in important areas within five years

play07:22

we'll be able to enact them and that is

play07:24

a profound shift before we go on to the

play07:26

many ways in which this could be the end

play07:29

of human history as you put it and the

play07:31

the potential downsides and risks of

play07:34

this can we just for a second just

play07:35

indulge me I'm an optimist at heart can

play07:37

we talk about the possibilities what are

play07:40

the potential upsides of this because

play07:42

there are many and they are really

play07:44

substantial I think you you wrote that

play07:46

it that there are there is the potential

play07:48

that this technology can help us deal

play07:50

with incredibly difficult problems and

play07:52

and create tremendous honestly positive

play07:54

outcomes so can we just briefly start

play07:56

with that before we go down down the

play07:58

road wasn't the end of human history

play08:00

again I'm not I'm not talking

play08:01

necessarily about the destruction of

play08:03

humankind or anything like that there

play08:06

are many positive potential it's just

play08:08

that control Power is Shifting away from

play08:11

human beings to an alien intelligence to

play08:14

a non-human intelligence we'll also get

play08:17

to that because there's a question of

play08:18

how much power but let's stick with the

play08:19

potential upsides first the

play08:21

opportunities Mustafa everything that we

play08:23

have created in human history is a

play08:26

product of our intelligence our ability

play08:28

to make predictions and then intervene

play08:31

on those predictions to change the

play08:33

course of the world is in a very

play08:35

abstract way the way we have produced

play08:37

our companies and our products and all

play08:39

the value that has changed our Century I

play08:41

mean if you think about it just a

play08:42

century ago a kilo of grain would have

play08:45

taken 50 times more labor to produce

play08:48

than it does today that efficiency which

play08:51

is the trajectory you have seen in

play08:52

agriculture is likely to be the same

play08:55

trajectory that we will see in

play08:56

intelligence everything around us is a

play08:58

product of intelligence and so

play09:00

everything that we touch with these new

play09:02

tools is likely to produce far more

play09:04

value than we've ever seen before and I

play09:06

think it's important to say

play09:07

these are not autonomous tools by

play09:10

default these these capabilities don't

play09:13

just naturally emerge from the models we

play09:15

attempt to engineer capabilities and the

play09:18

challenge for us is to be very

play09:19

deliberate and precise and careful about

play09:22

those capabilities that we want to

play09:23

emerge from the model that we want to

play09:25

build into the model and the constraints

play09:27

that we build around it it's super

play09:28

important not to anthropomorphically

play09:30

project ideas and you know potential

play09:33

intentions or potential agency or

play09:36

potential autonomy into these models the

play09:38

governance challenge for us over the

play09:40

next couple of decades to ensure that we

play09:42

contain this wave is to ensure that we

play09:45

always get to impose our constraints on

play09:49

the development of this traject the the

play09:51

trajectory of this development but the

play09:53

capabilities that will arise will mean

play09:55

for example potentially transformative

play09:57

improvements in human health speeding up

play09:59

the process of innovation dramatic

play10:01

changes in the way scientific discovery

play10:03

is done tough problems whether it's

play10:05

climate change a lot of the big

play10:07

challenges that we Face could be much

play10:10

more easily addressed with this

play10:11

capability everybody is going to have a

play10:14

personal intelligence in their pocket a

play10:16

smart and capable Aid a chief of staff a

play10:20

research assistant constantly

play10:22

prioritizing information for you putting

play10:24

together the right synthesized nugget of

play10:26

knowledge that you need to take action

play10:27

on at any given moment and that for sure

play10:29

is going to make us all much much

play10:31

smarter and more capable does that part

play10:33

of it sound appealing to you absolutely

play10:35

I mean, again, if there was no positive potential we wouldn't be sitting here; nobody would develop it, nobody would invest in it. It's so appealing; the positive potential is so enormous, in everything from much better healthcare to higher living standards to solving things like climate change. This is why it's so tempting; this is why we are willing to take the enormous risks involved. I'm just worried that in the end the deal will not be worth it. And I would comment especially on the notion of intelligence: I think it's overrated. Homo sapiens at present is the most intelligent entity on the planet. It is simultaneously also the most destructive entity on the planet, and in some ways also the most stupid entity on the planet, the only entity that puts the very survival of the ecosystem in danger.

So you think we are trading off more intelligence with more destructive risk?

Yes. Again, it's not deterministic. I don't think that we are doomed; I mean, if I thought that, what's the point of talking about it, if we can't prevent the worst-case scenario?

Well, I was hoping you thought you'd have some agency in actually...

Effectively, we still have agency. There are a few more years, I don't know how many, 5, 10, 30; we still have agency. We are still the ones in the driver's seat, shaping the direction this is taking. No technology is deterministic; this is something, again, we learned from history. You can use the same technology in different ways; you can decide which way to develop it. So we still have agency, and this is why you have to think very, very carefully about what we are developing.
Well, thinking very carefully about it is something that Mustafa has been doing in this book, and I want to now go through some of the most commonly discussed risks. I was trying to work out how I would go in sort of order of badness, so I'm starting with one that is discussed a lot but, relative to human extinction, is perhaps less bad, which is the question of jobs. Will artificial intelligence essentially destroy all jobs, because AIs will be better than humans at everything? You know, I'm an economist by training, and history suggests to me that that has never happened before, that the lump of labor fallacy indeed is a fallacy. But tell me what you think about that: do you think there is a risk to jobs?

It depends on
the time frame. Over a 10 to 20 year period my intuition, and you're right that so far the evidence doesn't support this, is that there isn't really going to be a significant threat to jobs. There's plenty of demand; there will be plenty of work. Over a 30 to 50 year time horizon it's very difficult to speculate. I mean, at the very least we can say that two years ago we thought that these models could never do empathy; we said that we humans were always going to preserve kindness and understanding and care for one another as a special skill that humans have. Four years ago we said, well, AIs will never be creative; humans will always be the creative ones, inventing new things, making these amazing leaps between new ideas. It's self-evident now that both of those capabilities are things that these models do incredibly well. And so I think for a period of time AIs augment our skills: they make us faster, more efficient, more accurate, more creative, more empathetic, and so on and so forth. Over a many-decade period it's much harder to say what are the set of skills that are the permanent preserve of the human species, given that these models are clearly very, very capable. And that's where the containment challenge really comes in. We have to make decisions; we have to decide as a species what is and what isn't acceptable over a 30-year period, and that means politics and
governance.

With regard to jobs, I agree that the scenario that there just won't be any jobs is an unlikely scenario, at least in the next few decades, but we have to look more carefully at time and space. In terms of time, the transition period is the danger. Some jobs disappear, some jobs appear, people have to transition. Just remember that Hitler rose to power in Germany because of three years of 25% unemployment. So we are not talking about, say, no jobs at all, but if, because of the upheavals caused in the job market by AI, we have, I don't know, three years of 25% unemployment, this could cause huge social and political disruptions. And then the even bigger issue is one of space: the disappearance of jobs and the new jobs will be created in different parts of the world. So we might see a situation where there is immense demand for more jobs in California or Texas or China, whereas entire countries lose their economic basis. So you need a lot more computer engineers and yoga trainers and whatever in California, but you don't need any textile workers at all in Guatemala or Pakistan, because this has all been automated. So it's not just the total number of jobs on the planet; it's the distribution between different countries.
And let's also try to remember that work is not the goal; work is not our desired end state. We did not create civilization so that we could have full employment; we created civilization so that we could reduce suffering for everybody. And the quest for abundance is a real one: we have to produce more with less. There is no way of getting rid of the fact that population growth is set to explode over the next century. There are practical realities about the demographic and geographic and climate trajectories that we're on which are going to drive forward our need to produce exactly these kinds of tools, and I think that should be an aspiration. Many, many people do work that is drudgery, exhausting and tiring; they don't find flow, they don't find their identity, and it's pretty awful. So I think that we have to focus on the prize here, which is a question of capturing the value that these models will produce and then thinking about redistribution. And ultimately the transition is exactly what's at stake: we have to manage that transition with
taxation.

But just with redistribution, I would say that the difficulty, again, is the political and historical difficulty. I think there will be immense new wealth created by these technologies; I'm less sure that the governments will be able to redistribute this wealth in a fair way on a global level. Like, I just don't see the US government raising taxes on corporations in California and sending the money to help unemployed textile workers in Pakistan or Guatemala retrain for the new job market.

Well,
that actually gets us to the second potential risk, which is the risk of AI to the political system as a whole. And you made a very good point in one of your writings, where you reminded us that liberal democracy was really born of the Industrial Revolution, and that today's political system is really a product of the economic system that we are in. And so there is, I think, a very fair question: if the economic system is fundamentally changed, will liberal democracy as we know it
survive?

Yeah, and on top of that, it's not just the Industrial Revolution; it's the new information technologies of the 19th and 20th century. Before the 19th century you don't have any example in history of a large-scale democracy. I mean, you have examples on a very small scale, like in hunter-gatherer tribes or in city-states like ancient Athens, but you don't have any example that I know of of millions of people spread over a large territory, an entire country, which managed to build and maintain a democratic system. Why? Because democracy is a conversation, and there was no information technology, no communication technology, that enabled a conversation between millions of people over an entire country. Only when first newspapers and then telegraph and radio and television came along did this become possible. So modern democracy as we know it is built on top of a specific information technology. Once the information technology changes, it's an open question whether democracy can survive. And the biggest danger now is the opposite of what we faced in the Middle Ages. In the Middle Ages it was impossible to have a conversation between millions of people, because they just couldn't communicate. But in the 21st century something else might make the conversation impossible: if trust between people collapses. If you go online, which is now the main way we converse on the level of a country, and the online space is flooded by non-human entities that maybe masquerade as human beings, you talk with someone and you have no idea if it's even human; you see a video, you hear an audio, and you have no idea if this is real, if this is true, if this is fake, if this is a human or not a human. In this situation, unless we have some guardrails, the conversation
collapses.

Is that what you mean when you say AI risks hacking the operating system?

This is one of the things. If bots can impersonate people, it's basically like what happened in the financial system. People invented money, and it was possible to counterfeit money, to create fake money. The only way to save the financial system from collapse was to have very strict regulations against fake money. The technology to create fake money was always there, but there was very strict regulation against it, because everybody knew that if you allow fake money to spread, trust in money collapses and the financial system collapses. And now we are in the analogous situation with the political conversation: now it's possible to create fake people, and if we don't ban that, then trust will collapse.
We'll get to the banning, or not banning, in a minute.

Democratizing access to the right to broadcast has been the story of the last 30 years. Hundreds of millions of people can now create podcasts and blogs, and they're free to broadcast their thoughts on Twitter and the internet broadly speaking. I think that has been an incredibly positive development: you no longer have to get access to the top newspaper, or acquire the skills necessary to be part of that institution. Many people at the time feared that this would destroy our credibility and trust in the big news outlets and institutions. I think that we've adapted incredibly well. Yes, there has been a lot of turmoil and instability, but with every one of these new waves I think we adjust our ability to discern truth and to dismiss nonsense. And there are both technical and governance mechanisms which will emerge in the next wave, which we can talk about, to address things like bot impersonation. I mean, I'm completely with you: we should have a ban on impersonation of digital people. It shouldn't be possible to create a digital Zanny and have that be platformed on Twitter talking all kinds of nonsense; enough with the real one. So I think that there are technical mechanisms that we can use to prevent those kinds of things, and that's why we're talking about them. There are mechanisms; we just need to employ them.
I would say two things. First of all, it's a very good thing that more people were given a voice. It's different with bots: bots don't have freedom of speech, so banning bots, because they shouldn't have freedom of speech, that's very important. Yes, there have been some wonderful developments in the last 30 years. Still, I'm very concerned that when you look at countries like the United States, like the UK to some extent, like my home country of Israel, I'm struck by the fact that we have the most sophisticated information technology in history and we are no longer able to talk to each other. That's my impression; maybe your impression of American politics, or of politics in other democracies, is different. My impression is that trust is collapsing, the conversation is collapsing, that people can no longer agree on who won the last elections, the most basic fact in a democracy. We had huge disagreements before, but I feel that now it's different, that really the conversation is breaking down. I'm not sure why, but it's really troubling that at the same time that we have really the most powerful information technology in history, people can no longer talk with each
other.

It's a very good point. We actually had, you may have seen it, a big cover package looking at what the impact might be in the short term on elections and on the political system, and we concluded actually that AI was likely to have a relatively small impact in the short term, because there was already so little trust. So it was a sort of double-edged answer: it was not going to make a huge difference, but only because things were pretty bad as they were. But you both said there needs to be regulation. Before we get to precisely how: the unit that we have that would do that is the nation-state and national governments. Yet you, Mustafa, in your book worry that actually one of the potential dangers is that the powers of the nation-state are eroded. Could you talk through that, as the sort of third in my escalating sense of risks?

The
challenge is that at the very moment when we need the nation-state to hold us accountable, the nation-state is struggling under the burden of a lack of trust, huge polarization, and a breakdown in our political process. And combined with that, the latest models are being developed by the private companies and by open source. It's important to recognize it isn't just the biggest AI developers: there's a huge proliferation of these techniques, widely available as open-source code that people can download from the web for free, and they're probably about a year or a year and a half behind the absolute cutting edge of the big models. And so we have this dual challenge: how do you hold centralized power accountable when the existing mechanism is basically a little bit broken, and how do you address this mass proliferation issue when it's unclear how to stop anything in mass proliferation on the web? That's a really big challenge. What we've started to see is self-organizing initiatives on the part of the companies: getting together and agreeing to sign up proactively to self-oversight, both in terms of audits and in terms of capabilities that we won't explore, etc. Now, I think that's only partially reassuring to people, clearly, maybe not even reassuring at all, but the reality is I think it's the right first step, given that we haven't actually demonstrated the large-scale harms arising from AIs just yet. I mean, this is one of the first occasions, I think, in general-purpose waves of technology, where we're actually starting to adopt a precautionary principle. I'm a big advocate of that. I think that we should be approaching a do-no-harm principle, and that may mean that we have to leave some of the benefits on the tree; some fruit may just not be picked for a while, and we might lose some gains over a couple of years, where we may look back in hindsight and think, oh well, we could have actually gone a little bit faster there. I think that's the right trade-off. This is a moment of caution: things are accelerating extremely quickly, and we can't yet do the balance between the harms and benefits perfectly well until we see how this wave unfolds a little bit. So I like the fact that our company, Inflection AI, and the other big developers are trying to take a little bit more of a cautious approach.

I think
that's a really interesting point, because we are having this conversation, both of you have written extensively about the challenges posed by this technology, and there's now a parlor game amongst practitioners in this world about what is the risk of extinction-level events. There's a huge amount of talk about this, and in fact I should probably ask you what percentage of your time, probably right now it's close to 100% of your time since you're promoting your book, is focused on the risk. But there's a lot of attention on this, which is good; we are thinking about it early. So that gets us, I think, now to the most important part of our conversation, which is: what do we do? And you, Mustafa, you lay out a 10-point plan, which is the kind of action-oriented thing that someone who doesn't just comment, like you and I do, but actually does things, would do. So tell us: what do we need to do, as humanity, as governments, as societies, to ensure that we capture the gains from this technology but we minimize the risks?

There are some very
practical things. So, for example, red-teaming these models means adversarially testing them and trying to put them under as much pressure as possible: pushing them to generate advice, for example, on how to generate a biological or chemical weapon, how to create a bomb, or even pushing them to be very sexist, racist, biased in some way. And that already is pretty significant; we can see their weaknesses. I mean, part of the release of these models in the last year has given everybody, I think, the opportunity to see not just how good they are but also their weaknesses, and that is reassuring. We need to do this out in the open; that's why I'm a huge fan of the open-source community as it is at the moment, because real developers get to play with the models and actually see how hard it is to produce the capabilities that sometimes we fear, that they're just going to be super manipulative and persuasive and destined to be awful. So that's the first thing: doing it out in the open. The second thing is that we have to share the best practices, and there's a competitive tension there, because safety is going to be an asset; I'm going to deliver a better product to my consumers if I have a safer model. But of course there's got to be a requirement that if I discover a vulnerability, a weakness in the model, then I should share that, just as we have done for decades in many waves of technology, not just in software security, for example, but in aviation. You know, the black box recorder: if there's a significant incident, not only does it record all the telemetry on board the aircraft but also everything that the pilots say in the cockpit, and if there's a significant safety incident, then that's shared all around the world with all of the competitors, which is great. Aircraft are one of the safest ways to get around, despite the fact that, on the face of it, if you described it to an alien, being 40,000 feet up in the sky is a very strange thing to do. So I think there's precedent there that we can follow.
I do also agree that it's probably time for us to explicitly declare that we should not allow these tools to be used for electioneering. I mean, we cannot trust them yet; we cannot trust them to be stable and reliable. We cannot allow people to be using them for counterfeit digital people, and clearly we've talked about that already. So there are some capabilities which we can start to take off the table. Another one would be autonomy. Right now I think autonomy is a pretty dangerous set of methods. It's exciting, it represents a possibility that could be truly incredible, but we haven't wrapped our hands around what the risks and limitations are. Likewise, training an AI to update and improve its own code, this notion of recursive self-improvement: closing the loop so that the AI is in charge of defining its own goals, acquiring more resources, updating its own code with respect to some objective. These are pretty dangerous capabilities. Just as we have KYC, know your customer, or just as we license developers of nuclear technologies and all the components involved in that supply chain, there'll be a moment where, if some of the big technology providers want to experiment with those capabilities, then they should expect there to be robust audits; they should expect to be licensed, and there should be independent oversight.

So
how do you get that done? There are several challenges in doing it. One is the division between the relatively few leading-edge models, of which you have one, and then the larger tail of open-source models, where the ability to build the model is decentralized and lots of people have access to it. My sense is that the capabilities of the latter are a little bit behind the capabilities of the former, but they are growing all the time. And so, if you have really considerable open-source capability, what is to stop the angry teenager in some small town developing capabilities that could shut down the local hospital, and how do you, in your regulatory framework, guard against that?
Well, look, part of the challenge is that these models are getting smaller and more efficient, and we know from the history of technologies that anything that is useful and valuable to us gets cheaper, easier to use, and proliferates far and wide. So the destiny of this technology over a two-, three-, four-decade period has to be proliferation, and we have to confront that reality. It isn't a contradiction to name the fact that proliferation seems to be inevitable, but containing centralized power is an equivalent challenge. So there is no easy answer to that. I mean, beyond surveilling the internet, it is pretty clear that in 30 years' time, like you say, garage tinkerers will be able to experiment. If you look at the trajectory of synthetic biology, we now have desktop synthesizers, that is, the ability to engineer new synthetic compounds. They cost about twenty thousand dollars, and they basically enable you to create potentially molecules which are more transmissible or more lethal than we had with Covid. You can basically experiment, and the challenge there is that there's no oversight. You buy it off the shelf; you don't need a great deal of training, probably an undergraduate degree in biology today, and you'll be able to experiment. Now, of course, they're going to get smaller, easier to use, and spread far and wide, and so in my book I'm really trying to popularize the idea that this is the defining containment challenge of the next few decades.

So you use the word
containment, which is interesting, because I'm sure the word containment immediately conjures for you images of George Kennan and the post-war Cold War dynamic. And we're now in a geopolitical world that, whether or not you call it a new cold war, is one of great tension between the US and China. Can this kind of containment, as Mustafa calls it, be done when you have the sort of tensions you've got between the world's big players? Is the right paradigm thinking about the arms-control treaties of the Cold War? How do we go about doing this at a kind of international level?

I think this
is the biggest problem. If it was a question of humankind versus a common threat of these new intelligent alien agents here on Earth, then yes, I think there are ways we can contain them. But if the humans are divided among themselves and are in an arms race, then it becomes almost impossible to contain this alien intelligence. And I'm tending to think of it more in terms of really an alien invasion: like somebody coming and telling us that there is an alien fleet of spaceships coming from planet Zircon or whatever, with highly intelligent beings; they'll be here in five years and take over the planet. Maybe they'll be nice, maybe they'll solve cancer and climate change, but we are not sure. This is what we are facing, except that the aliens are not coming in spaceships from planet Zircon; they are coming from the laboratories.
On the actual characterization of the nature of the technology: an alien has, by default, agency. These are going to be tools that we can apply; we have narrow settings.

Yes, but let's say they potentially have agency. We can try to prevent them from having agency, but we know that they are going to be highly intelligent and at least potentially have agency, and this is a very, very frightening mix, something we never confronted before. Again, atom bombs didn't have a potential for agency; printing presses did not have a potential for agency. This thing, again, unless we contain it... and the problem of containment is very difficult, because potentially they'll be more intelligent than us. How do you prevent something more intelligent than you from developing the agency it has? I'm not saying it's impossible; I'm just saying it's very, very difficult.
I think our best bet is not to think in terms of some kind of rigid regulation, you should do this, you shouldn't do that; it's in developing new institutions, living institutions, that are capable of understanding the very fast developments and reacting on the fly. At present the problem is that the only institutions who really understand what is happening are the institutions who develop the technology. The governments, most of them, seem quite clueless about what's happening; also the universities. I mean, the amount of talent and the amount of economic resources in the private sector is far, far higher than in the universities. And again, I appreciate that there are actors in the private sector, like Mustafa, who are thinking very seriously about regulation and containment, but we must have an external entity in the game, and for that we need to develop new institutions that will have the human resources, that will have the economic and technological resources, and also will have the public trust, because without public trust it won't work. Are we capable of creating such new
institutions I don't know I do think Eva

play36:39

Rays is an important point which is as

play36:41

we started this conversation and you

play36:43

were painting the picture of five years

play36:44

time and you were saying that the AIS

play36:46

would be ubiquitous we'd all have our

play36:48

own ones but that they would have the

play36:50

capability to act not just to process

play36:53

information they would have the

play36:54

creativity they have now and the ability

play36:56

to act but already from these generative

play36:59

AI models the power that we've seen in

play37:01

the last year two three four years has

play37:03

been that they have been able to act in

play37:05

ways that you and your other your fellow

play37:09

technologists didn't anticipate they

play37:11

they reached you know you didn't

play37:13

anticipate you know the the speed with

play37:16

which they would you would Win It Go or

play37:18

so forth there was a the Striking thing

play37:21

about them is that they have developed

play37:22

in

play37:23

unanticipatedly fast ways so if you

play37:26

combine that with capability you don't

play37:29

have to go as far as Yuval is saying and

play37:31

saying that they're all more intelligent

play37:32

than humans but there is an

play37:33

unpredictability there that I think does

play37:37

raise the concerns that Uval raises

play37:38

which is you their creators can't quite

play37:42

predict what powers they will have

play37:44

they may not be fully autonomous but

play37:47

they will be moving some ways towards

play37:49

there and so how do you guard against

play37:52

that or how do you you know red teaming

play37:54

you use the phrase which is that I

play37:56

understand it is that you know you keep

play37:57

checking what's happening and tweak them

play37:59

when you've seen what's when you

play38:01

pressure test them you try to make them

play38:02

fit you can't pressure test for

play38:04

everything in advance so there is a I

play38:06

think a very real point that Yuval is

play38:09

making about as the capabilities

play38:10

increase

play38:11

so the risks increase of relying on you

play38:15

and other Creator companies to to make I

play38:18

mean it's a very fair question and

play38:19

that's why I've long been calling for the precautionary principle. We should take some capabilities off the table and classify those as high-risk. I mean, frankly, the EU AI Act, which has been in draft for three and a half years, is very sensible: it has a risk-based framework that applies to each application domain, whether it's healthcare or self-driving or facial recognition, and it basically takes certain capabilities off the table when a threshold is exceeded. I listed a few earlier. Autonomy, for example, is clearly a capability that has the potential to be high-risk; recursive self-improvement, same story. So this is the moment when we have to adopt a precautionary principle, not through any fear-mongering, but just as a logical, sensible way to proceed. Another model which I think is very sensible is to take an IPCC-style approach: an international consensus around an investigatory power to establish the scientific fact basis for where we are with respect to capabilities. That has been an incredibly valuable process. Set aside the negotiation and the policy-making; just the evidence, observing where we are. And you don't have to take it from me: you should be able to take an independent panel of experts, to whom I would personally grant access to everything in my company if they were a trusted, truly impartial actor. Without question we would grant complete access, and I know that many of the other companies would do the same.

Again, people are drawn
towards the kind of scenario of the AI creating a lethal virus, Ebola plus COVID, and killing everybody. Let's go in a more economist direction: financial systems. You gave, as a new Turing test, the idea of AI making money. What's wrong with making money? Wonderful thing. So let's say that you have an AI which has a better understanding of the financial system than most humans, most politicians, maybe most bankers. And let's think back to the 2007-2008 financial crisis. It started with these, what were they called, CDOs. This is exactly something that these genius mathematicians invented. Nobody understood them except for a handful of genius mathematicians on Wall Street, which is why nobody regulated them, and almost nobody saw the financial crash coming. What happens, and again this is the kind of apocalyptic scenario which you don't see in Hollywood science-fiction movies, if the AI invents a new class of financial devices that nobody understands? It's beyond human capability to understand: such complicated math, so much data, that nobody understands it. It makes billions and billions of dollars, and then it brings down the world economy, and no human being understands what the hell is happening. The prime ministers, the presidents, the finance ministers: what is happening? And again, this is not fantastical; we saw it with human mathematicians in 2007-8.

I think
that's, look, that's one. You can easily paint pictures here that make you want to jump off the nearest cliff, and that's one. But my other response to Mustafa's laying out, where you say we just need to rule out certain actions, is to go back to the geopolitics. Is it sensible for a country to rule out certain capabilities if the other side is not going to rule them out? So you have a kind of political-economy problem going down the road.

This is a moment when we
collectively, in the West, have to establish our values and stand behind them. What we cannot have is a race to the bottom that says, just because they're doing it, we should take the same risk. If we adopt that approach and cut corners left, right, and centre, we'll ultimately pay the price. And "they're going to go off and do it anyway" is not an answer; we've already seen that with lethal autonomous weapons. There has been a negotiation in the UN to regulate lethal autonomous weapons for over 20 years, and they have barely reached agreement on the definition, the definition of lethal autonomous weapons, let alone any consensus. So that's not great, but we do have to accept that it's the inevitable trajectory, and from our own perspective we have to decide what we're prepared to tolerate in society with respect to free-acting AIs, facial surveillance, facial recognition, and, you know, autonomous systems generally. So far we've taken a pretty cautious approach. We don't have drones flying around everywhere; it's already totally possible, technically, to autonomously fly a drone to navigate around London, and we've ruled it out. We don't yet have autonomous self-driving cars, even though, with some degree of harm, they actually function pretty well. So the regulatory process is also a cultural process of what we think is socially and politically acceptable at any given moment, and I think an appropriate level of caution is what we're seeing.
But I completely agree on that: we need, in many fields, a coalition of the willing, and if some actors in the world don't want to join, it's still in our interest. So again, take something like banning bots impersonating people. Some countries will not agree, but that doesn't matter; to protect our societies, it's still a very good idea to have these kinds of regulations.

That area of agreement is
one to bring us to a close, but I want to end by asking both of you, and you first, Mustafa: you are both raising alarms, but you are heavily involved in creating this future. Why do you carry on?

I personally believe
that it is possible to get the upsides and minimize the downsides. The AI that we have created, Pi, which stands for personal intelligence, is one of the safest in the world today. It doesn't produce the racist, toxic, biased screeds that models did two years ago; it doesn't fall victim to any of the jailbreaks, the prompt hacks, the adversarial red teams; none of those work. We've made safety the absolute number-one priority in the design of our product. So my goal has been to do my very best to demonstrate a path forward in the best possible way. This is an inevitable unfolding over multiple decades; this really is happening, the coming wave is coming, and I think my contribution is to try to demonstrate, in the best way that I can, a manifestation of a personal intelligence which really does adhere to the best safety constraints that we could possibly think of.

So, Yuval,
you've heard Mustafa's explanation for why he continues. You look back over human history; now, as you look forward, is this a technology and a pace of innovation that humanity will come to regret, or should Mustafa carry on?
It could be. I can't predict the future. I would say that we invest so much in developing artificial intelligence, and we haven't seen anything yet; these are still the very first baby steps of artificial intelligence. If you think about, I don't know, the evolution of organic life, this is like the amoeba of artificial intelligence, and it won't take millions of years to get to a T. rex; maybe it will take 20 years to get to a T. rex. But one thing to remember is that our own minds also have a huge scope for development. With humanity, too, we haven't seen our full potential yet, and if, for every dollar and minute that we invest in artificial intelligence, we invest another dollar and minute in developing our own consciousness, our own mind, I think we'll be okay. But I don't see it happening; I don't see this kind of investment in human beings that we are seeing in the machine.

Well, for me, this conversation with the two of you has been just that investment. Thank you both very much indeed.

Thank you. Thank you. Thank you.