Mustafa Suleyman on The Coming Wave of AI, with Zanny Minton Beddoes

Intelligence Squared
4 Jul 2024 · 83:31

Summary

TL;DR: In a thought-provoking discussion, Mustafa Suleyman, co-founder of DeepMind and Inflection AI, explores the transformative impact of generative AI models like ChatGPT on society. He delves into AI's potential to revolutionize various sectors, the ethical considerations of data training, and the importance of governance in steering AI's trajectory. Suleyman also addresses the existential risks of AI, advocating for a balanced view that acknowledges its benefits and challenges, while emphasizing the need for proactive and informed regulation to ensure AI's positive impact on humanity.

Takeaways

  • 📚 Mustafa Suleyman, co-founder of DeepMind and Inflection AI, discusses the transformative potential of AI in his book 'The Coming Wave', emphasizing its ability to change various aspects of life and society.
  • 🤖 AI's generative capabilities are leading a revolution where models can produce new content, such as images, text, and music, which is a significant leap from the previous focus on classification tasks.
  • 🚀 The advancement in AI is unprecedented, with computational power for cutting-edge models growing by 10x each year, pointing to a roughly thousandfold increase in AI capability over the next few years and potentially leading to AI that can plan and execute complex tasks.
  • 🌐 There is a global impact and interest in AI, with many concerned about the downsides, such as existential risks, while others, like Suleyman, offer a compelling view of the positive potential of AI.
  • 🧩 The development of personal AI, like Inflection AI's 'Pi', aims to provide individuals with their own AI assistant, capable of organization, planning, and support, almost like a 'chief of staff'.
  • 🏛️ Suleyman calls for robust governance and oversight of AI, including the presence of technical experts in government and the willingness to experiment with regulation to ensure safety and address risks.
  • 🌍 The geopolitical landscape and tensions between major powers like the US and China pose challenges to achieving global governance structures for AI, but Suleyman stresses the importance of not demonizing other nations or engaging in a race to the bottom on values.
  • 🛠️ The hardware component of AI, particularly GPUs, is a critical area that requires attention due to the monopoly and concentration of chip manufacturing, which could affect the development and proliferation of AI.
  • 💡 Suleyman highlights the importance of creativity and innovation in AI, stating that models are not just regurgitating information but are capable of novel predictions and interpolations between concepts.
  • 🔮 While there is much debate about the singularity and existential risks of AI, Suleyman is skeptical of the singularity framing and believes existential risks are very low, focusing instead on the practical near-term capabilities and governance of AI.

Q & A

  • What is the main topic of discussion in the transcript?

    -The main topic of discussion is the impact of artificial intelligence, particularly generative AI models like ChatGPT, on the world, and the future implications presented in Mustafa's book 'The Coming Wave'.

  • Who is Mustafa and what are his credentials in the AI field?

    -Mustafa Suleyman is the co-founder of DeepMind and Inflection AI. He has considerable credibility in the AI field, having established two successful AI companies and contributed to the development of AI technology.

  • What is the potential of AI in transforming various sectors according to Mustafa?

    -According to Mustafa, AI has the potential to bring about massive efficiencies and innovation in various sectors such as agriculture, healthcare, education, and transportation, leading to an era of radical abundance.

  • What are some of the risks and downsides associated with AI that are discussed in the transcript?

    -Some of the risks and downsides include the potential for AI to be used for harmful purposes, the impact on jobs due to automation, the spread of misinformation through deep fakes, and the challenges to liberal democracy and governance.

  • What is the 'lump of labor fallacy' mentioned in the transcript?

    -The 'lump of labor fallacy' is the belief that there is a fixed amount of work to be done, and with automation, there will be fewer jobs available for humans. Mustafa argues that history has shown that new jobs and roles are created as old ones become automated.

  • What is Mustafa's view on the role of government in managing the impact of AI?

    -Mustafa believes that governments should have technical and engineering expertise, take risks with regulation, and be involved in the creation of technology to deeply understand and control it effectively.

  • What is the significance of the 'voluntary commitments' mentioned in the transcript?

    -The voluntary commitments are a set of guidelines that AI companies have agreed to follow, which include exposing their models to independent scrutiny and sharing weaknesses publicly. These commitments are a precursor to future regulation and are meant to ensure safety and ethical use of AI.

  • How does Mustafa address the concern about AI and existential risks?

    -Mustafa considers the risk of AI leading to existential catastrophe to be very low. He emphasizes that the focus should be on the practical near-term capabilities of AI and the consequences for society and nation-states.

  • What is the 'Pi' AI that Mustafa mentions in the transcript?

    -Pi is a personal intelligence AI developed by Mustafa's company, Inflection AI. It is designed to be a conversational partner with high emotional intelligence, providing support and assistance to users in a personalized manner.

  • What is the potential impact of AI on climate change and environmental issues?

    -AI has the potential to help address climate change by optimizing industrial systems for greater efficiency, aiding in the development of more resilient crops, and contributing to the invention of new solutions to environmental problems.

Outlines

00:00

📚 Introduction to AI and the Impact of Chat GPT

The speaker opens the discussion by highlighting the significance of AI, particularly generative AI models like ChatGPT, which has garnered widespread attention since its release in November last year. The audience's familiarity with ChatGPT is assessed through a show of hands, indicating its prevalence. The speaker introduces Mustafa, a co-founder of DeepMind and Inflection AI, who has written a compelling book on the future of AI. Mustafa's background in philosophy and theology at Oxford is mentioned, along with his transition to the tech industry and his contributions to the field of AI.

05:01

🤖 Mustafa's Journey from Philosophy to AI Entrepreneur

Mustafa shares his personal journey, starting with his studies in philosophy and theology at Oxford, his decision to drop out to pursue a greater impact in the world, and his work with a charity to establish a telephone counseling service. He discusses his shift from non-profit work to the realization of the potential of technology, inspired by the rapid growth of Facebook. Mustafa's quest to learn about technology led him to various business ventures, including an unsuccessful attempt at providing Wi-Fi infrastructure for restaurants. His eventual partnership with Demis Hassabis, co-founder of DeepMind, is highlighted, along with their ambitious goal to create AI capable of replicating or even surpassing human intelligence.

10:03

🚀 Deep Mind's Early Days and the Bet on Deep Learning

Mustafa reflects on the early days of DeepMind, founded in 2010, when the founders had a vision of creating an AI that could match human intelligence. He emphasizes the significant bet they placed on deep learning, a technology that was not yet widely adopted. Mustafa mentions that some of the key figures in the AI industry, including the 'Godfather of AI' Geoffrey Hinton, were involved with DeepMind in its early stages. The speaker also notes the importance of timing and being ahead of the curve in the AI revolution.

15:04

🌐 The Generative Revolution and Future Predictions

The speaker discusses the transition from the classification revolution to the generative revolution in AI, where models are now capable of producing new content rather than just classifying existing data. Mustafa predicts that in the next 5 years, AI will reach human-level capability across various tasks, leading to significant changes in innovation and management efficiency. He describes a future where AI can plan across multiple time horizons, from generating new product ideas to researching, manufacturing, and marketing them.

20:05

🌱 The Positive Impact of AI on Society and the Environment

Mustafa outlines the potential positive impacts of AI, such as its ability to contribute to solving climate change, improving healthcare, and increasing efficiency in various sectors. He emphasizes that intelligence has been the driving force behind human creation and innovation, and AI is an extension of that, capable of discovering new knowledge and inventing solutions to problems. Mustafa envisions a future of radical abundance, where AI serves as a scientific advisor, research assistant, tutor, coach, and confidant for everyone.

25:05

🔒 The Risks and Containment of AI Technologies

The speaker shifts the focus to the potential risks of AI, including the possibility of AI models providing information on harmful activities, such as manufacturing biological and chemical weapons. Mustafa discusses the challenges of controlling AI models, especially open-source ones, and the potential consequences of powerful AI falling into the wrong hands. He also addresses the relationship between large tech companies and nation-states in overseeing AI development and ensuring accountability.

30:08

🏛️ The Future of Work and the Role of Government in AI Oversight

Mustafa and the speaker debate the future of jobs in the context of AI, with Mustafa arguing against the idea of mass unemployment due to AI. He discusses the historical trend of job creation following automation and the potential for AI to liberate people from the obligation to work, leading to a focus on well-being and prosperity. The conversation then turns to the role of government in AI oversight, with Mustafa advocating for governments to build technology, employ technical experts, and take risks with regulation to ensure the safe and beneficial development of AI.

35:10

🌐 Geopolitical Tensions and the Global Governance of AI

The discussion moves to the geopolitical implications of AI, with the speaker raising concerns about the tensions between the US and China and the race for global dominance. Mustafa emphasizes the importance of not demonizing China and focusing on the actions they are taking, which are driven by self-preservation instincts. He also stresses the need for good governance and oversight, mentioning the EU AI Act as an example of robust regulation and the importance of not engaging in a race to the bottom on values.

40:14

🛠️ The Importance of Hardware in AI Development

The speaker and Mustafa discuss the importance of hardware in AI, particularly GPUs, and the current monopoly on chip manufacturing. Mustafa explains the narrow supply chain and the potential implications for regulation and access to critical chips. He also touches on the potential for open-source hardware to contribute to the development of AI, while acknowledging the challenges and limitations associated with it.

45:14

🌍 Global Participation in AI Development

Mustafa addresses the global nature of AI development, noting the significant contributions from Chinese scientists and the importance of including them in the conversation. He refutes the stereotype of Chinese scientists as mere copiers, highlighting their creativity and desire to build their own businesses and services using AI technologies.

50:15

💡 AI as a Tool for Creativity and Innovation

The speaker explores the potential of AI to contribute to creativity and innovation, questioning whether AI could develop ideas like the concept of Apple. Mustafa explains that AI models are capable of interpolation, combining existing ideas to create novel predictions. He believes AI will aid human creativity for the foreseeable future, rather than operating independently.

55:15

🏦 The Role of Regulation and Self-Governance in AI

Mustafa discusses the importance of independent external technical expertise in governing AI and the challenges of finding competent regulators. He mentions the voluntary commitments made by AI companies, including transparency and sharing of weaknesses, as a precursor to future regulation. The speaker raises concerns about conflicts of interest, to which Mustafa acknowledges the inherent conflicts in for-profit companies but emphasizes the steps taken to address them.

1:00:19

🌐 The Global Impact of AI on Inequality and Culture

The conversation concludes with a discussion on the impact of AI on inequality and culture, particularly the representation of non-English speaking communities. Mustafa acknowledges the current limitations of AI in non-major languages and the importance of reflecting diverse cultures in training data. He also addresses the potential for AI to exacerbate existing inequalities, while also providing tools for broader access and participation.

Keywords

💡Artificial Intelligence (AI)

Artificial Intelligence refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is the central theme, with a focus on generative AI models like ChatGPT, which have the potential to revolutionize various aspects of society. The script discusses AI's impact on jobs, governance, and the potential existential risks associated with its development.

💡Generative AI

Generative AI models are a subset of AI that can create new content, such as text, images, or music, that is not just a replication of existing data. The script highlights generative AI's ability to produce novel outputs, which is a significant shift from the traditional use of AI for classification tasks.
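To make that "two sides of the same coin" idea concrete, here is a minimal, hypothetical sketch (not from the talk): classification maps an input to the closest known concept, while generation interpolates a new point in the space between concepts, as Suleyman describes later in the transcript. The concept names and tiny three-dimensional vectors are invented purely for illustration.

```python
# Toy sketch of classification vs. generation-as-interpolation.
# The "concept embeddings" below are made up for illustration only.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

concepts = {
    "dog":          [1.0, 0.0, 0.0],
    "pink":         [0.0, 1.0, 0.0],
    "yellow spots": [0.0, 0.0, 1.0],
}

def classify(x):
    """Classification: which known concept is this input most like?"""
    return max(concepts, key=lambda name: dot(x, concepts[name]))

def generate(weights):
    """Generation: interpolate a new point between several concepts."""
    mixed = [0.0, 0.0, 0.0]
    for name, w in weights.items():
        mixed = [m + w * c for m, c in zip(mixed, concepts[name])]
    return mixed

print(classify([0.9, 0.1, 0.2]))                                 # -> "dog"
print(generate({"dog": 0.6, "pink": 0.2, "yellow spots": 0.2}))  # a "pink dog with yellow spots" point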

💡DeepMind

DeepMind is a UK-based AI company co-founded by Mustafa Suleyman, one of the speakers in the video. It is known for its advancements in creating AI that can replicate human intelligence. The script mentions DeepMind as an example of the successful AI companies that are shaping the future of technology.

💡Inflection AI

Inflection AI is another AI company co-founded by Mustafa Suleyman, mentioned in the script as part of his background in the AI industry. The company develops Pi, a personal AI designed to act as a conversational assistant, contributing to the broader AI ecosystem.

💡Personal Intelligence (Pi)

Personal Intelligence, or Pi, as referred to in the script, is an AI concept where each individual has their own AI assistant. This AI acts as a personal confidant, advisor, and helper, designed to be on the user's side and assist with organization, understanding, and navigating the world. Pi is portrayed as a future aspect of AI that will become commonplace.

💡Existential Risk

Existential risk in the context of the video refers to the potential threats posed by AI that could lead to the extinction or significant harm of humanity. The script discusses concerns about AI's downsides, such as the creation of harmful content or the potential for AI to be misused by malicious actors.

💡Computational Power

The script emphasizes the exponential growth in computational power used for AI models, noting that this has increased by 10 times each year for a decade. This growth is fundamental to the advancement of AI capabilities, enabling more complex and accurate AI models to be developed.
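A quick back-of-the-envelope check of the figures quoted in the discussion; these are the speakers' round numbers (roughly 10x per year, and roughly 10x per GPT generation), not measurements.

```python
# Rough arithmetic behind the compute figures quoted in the talk (illustrative only).

yearly_growth = 10                      # compute for cutting-edge models grows ~10x per year
past_decade = yearly_growth ** 10       # ten consecutive 10x steps
print(f"Last 10 years: ~{past_decade:.0e}x growth in compute")   # ~1e+10x

# Each GPT generation is described as ~10x more compute than the last,
# so GPT-2 -> GPT-4 spans two generations:
print(f"GPT-2 to GPT-4: {yearly_growth ** 2}x compute")          # 100x

# "Three or four orders of magnitude" over the next five years:
print(f"Next 5 years: {10 ** 3}x to {10 ** 4}x more compute")
```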

💡Open Source AI

Open Source AI models are AI systems whose code is publicly available, allowing anyone to use, modify, and distribute the technology. The script discusses the challenges of controlling open source AI models, which can potentially fall into the wrong hands and be used for harmful purposes.

💡Regulation

Regulation in the context of AI refers to the establishment of rules and oversight mechanisms to govern the development and use of AI technologies. The script calls for responsible regulation to ensure that AI is developed and used ethically and safely, without causing harm to society.

💡Redistribution

Redistribution in the video refers to the idea of sharing the benefits and wealth generated by AI more evenly across society. The script discusses the potential for AI to exacerbate inequality and the need for governments to consider how to distribute the benefits of AI advancements fairly.

💡Meritocracy

Meritocracy is the idea that the best ideas and innovations should rise to the top based on their merit, regardless of the status or background of their originators. The script mentions meritocracy in the context of AI, suggesting that AI technologies should be accessible to everyone, not just a privileged few.

Highlights

ChatGPT's release in November last year marked a global realization of generative AI's potential to change the world.

Mustafa Suleyman, co-founder of DeepMind and Inflection AI, offers a compelling look at AI's future in his book, 'The Coming Wave'.

Suleyman's background in philosophy and theology at Oxford led to a unique approach to founding tech companies.

DeepMind's early focus on deep learning positioned it ahead of the AI curve, attracting notable figures in the field.

Suleyman predicts AI will reach human-level capability across various tasks within the next 3 to 5 years, leading to significant societal shifts.

AI's generative revolution is enabling models to produce novel content, unlike the classification tasks of the past decade.

The exponential growth in computational power dedicated to AI models is unprecedented in technology history.

Suleyman envisions a future where personal AI assistants function as chiefs of staff, prioritizing and supporting individuals.

AI has the potential to solve complex issues like climate change and improve healthcare through increased efficiency and invention.

Suleyman addresses the risks of AI proliferation, including the empowerment of harmful actors and the challenges of containment.

The debate on AI's impact on jobs reflects a historical pattern, but the future may see a shift towards less work and more leisure.

Suleyman argues for a focus on redistribution of wealth and a reevaluation of work's role in society as AI advances.

The rise of AI may challenge liberal democracy, with concerns about deep fakes and misinformation in the political sphere.

Suleyman calls for robust governance and oversight to ensure AI technologies are developed and used responsibly.

The potential environmental impact of AI's energy consumption is mitigated by the move towards renewable energy in data centers.

Suleyman discusses the importance of education and the role of AI in democratizing access to high-quality learning resources.

The future of AI governance includes voluntary commitments from tech companies and potential regulation to ensure safety.

Transcripts

play00:00

hello everybody it's great to see my

play00:01

gosh so many friends actually and indeed

play00:03

my husband which is a bit alarming he

play00:04

never turns up to any event I do uh but

play00:07

um it is great to be here uh to talk

play00:11

with about literally one of the hottest

play00:14

topics of the moment with someone who

play00:17

has written one of the best books about

play00:18

it um how many of you have used chat GPT

play00:22

just a show of

play00:23

hands virtually everybody I think you'd

play00:27

probably then agree that chat GPT came out in

play00:29

November last year and it was only then

play00:31

that most people realized that

play00:34

artificial intelligence generative AI

play00:36

models in particular were about to

play00:39

change the world and suddenly there was

play00:40

a kind of collective Global oh my God

play00:43

this capability is extraordinary and

play00:46

it's been reflected in endless numbers

play00:48

of editorials hand-wringing politicians

play00:51

and I think I'm right in saying the main

play00:53

focus has been on the downsides everyone

play00:55

has their pet view of what the odds are

play00:57

of existential risk are we all going to

play00:59

kill ourselves it's all terrible and

play01:02

Mustafa comes into this as a man with

play01:05

considerable credibility he is a man who

play01:07

has co-founded not just one but two

play01:09

successful AI companies uh and he's a

play01:12

man who in this book takes a

play01:16

sober realistic and actually very

play01:19

compelling look at what lies ahead of us

play01:21

and so that's why you really should read

play01:23

it it's great I've read it twice uh you

play01:25

should read it Mustafa just to give you

play01:28

some he doesn't need much introduction I

play01:29

don't think I think to this group but he

play01:31

was a co-founder of Deep Mind back in

play01:34

2010 uh he then was a co-founder of

play01:36

inflection AI with Reid Hoffman Reid

play01:39

Hoffman his co-founder has with the help

play01:41

of chat GPT written an

play01:43

extremely upbeat view of the potential

play01:45

of this technology so I'd love to know

play01:47

the debates between the two of you um he

play01:50

was got a CBE a few years ago for his

play01:52

Visionary services and influence in the

play01:54

UK technology sector um he is also on

play01:58

the board of The Economist so I get to

play02:00

see Mustafa working up close um uh he's

play02:03

a friend of The Economist friend of uh

play02:05

and and great figure in British

play02:07

technology but I think and the place to

play02:10

start with this book and the book is

play02:11

called the coming wave and you will know

play02:13

that there has been if you've turned on

play02:14

your TV or listen to a podcast recently

play02:16

you will know that never mind the coming

play02:18

wave there is already a wave of um

play02:21

publicity and people being impressed

play02:23

with this book I believe you've had you

play02:25

told me 60 appearances of various sorts

play02:27

so consider yourselves lucky or 61 on

play02:29

this list uh but understandably the work

play02:32

the book has had a tremendous impact

play02:33

because it is very interesting very

play02:35

thoughtful and it's on the hottest topic

play02:36

of the moment so we want to talk most of

play02:38

the time about the book but I do want to

play02:40

for those of you who don't know Mustafa

play02:41

to get a little bit of background and

play02:44

the first is that Mustafa is actually

play02:46

not a computer geek you didn't study

play02:48

computer code right you studied

play02:50

philosophy and theology at Oxford so can

play02:52

you just give us the kind of potted

play02:54

history about how a man who studied

play02:55

philosophy and theology comes to be the

play02:58

co-founder of two tech companies what

play03:00

are you

play03:02

doing well I've always found philosophy

play03:05

a systems thinking tool it's enables me

play03:08

to be rigorous and clear about what I

play03:10

think and you know right from the very

play03:13

outset I think when I was 19 I actually

play03:15

dropped out of My Philosophy degree no I

play03:17

didn't know that yeah I didn't finish

play03:20

and I was really motivated by the impact

play03:23

that I could have in the world I left to

play03:25

help start a charity um at the time it

play03:28

was a telephone Counseling Service um

play03:31

called Muslim youth helpline and it was

play03:33

a secular I was an atheist even though I

play03:36

had grown up uh with a Muslim background

play03:39

uh it was a secular service that was

play03:41

designed to provide faith and culturally

play03:44

sensitive um support to Young British

play03:46

Muslims this was in

play03:48

2003 and you know I I found myself at

play03:52

Oxford studying this very theoretical

play03:55

esoteric you know set of ideas and I

play03:58

wanted to put real things into practice

play04:00

in terms of my ethics and that was why I

play04:02

went to you know start the helpline

play04:04

and worked on that as a volunteer for 3

play04:06

years uh I soon got you know frustrated

play04:09

about the scale of impact um in our

play04:12

nonprofit uh and I worked briefly for

play04:14

the mayor of London at the time Ken

play04:15

Livingston um as a human rights policy

play04:18

officer um and you know that was that

play04:21

was inspiring but I was also struggling

play04:23

with the scale of impact I I I realized

play04:27

that you know if if I didn't capture

play04:29

what what really makes us organized and

play04:31

effective as a species The Profit

play04:33

incentive then I was going to miss one

play04:34

of the most important things to happen

play04:36

in my lifetime and um at the time I saw

play04:40

the rise of Facebook this was sort of

play04:42

around 2007 2008 and it had grown in

play04:46

the space of two years to 100 million

play04:48

monthly active users and I was totally

play04:51

blown away at how quickly this was

play04:54

growing out of seemingly nowhere

play04:56

something completely new to me and so I

play04:58

set about on a quest to find anyone and

play05:00

everyone that would speak to me to teach

play05:02

me about technology I had started a

play05:04

bunch of businesses before that two

play05:06

different businesses one actually a

play05:08

technology company selling electronic

play05:10

point of sale systems actually around

play05:12

here in Notting Hill uh in restaurants

play05:15

uh trying to put Wi-Fi infrastructure in

play05:17

there and so on that was a that was

play05:18

unsuccessful that was ahead of its time

play05:21

um and so I was looking for people who I

play05:22

could you know form a new partnership

play05:25

with and figure out how to take

play05:26

advantage of of of Technology uh and

play05:29

that's where I met my friend and

play05:31

co-founder of deep mind Demis Hassabis

play05:33

because he was the brother of my best

play05:35

friend at the time from school um and he

play05:38

was just finishing his PhD uh in

play05:40

neuroscience at UCL and we got together

play05:42

and you know the rest is history and at

play05:44

that time you know back in 2010 you had

play05:46

between you and there was another

play05:48

co-founder right Shane Legg the three of

play05:49

you had the ambition that you were going

play05:51

to create an artificial intelligence

play05:54

that was you know capable of replicating

play05:57

human intelligence or even succeeding it

play05:59

so just just think this was 13 years ago

play06:01

the rest of us didn't even know this

play06:02

stuff was really going on you're you're

play06:04

in your where is it in Russell Square

play06:06

somewhere did you imagine that by 2023

play06:10

the world would have what we have now I

play06:13

mean in a way yes it was difficult for

play06:16

us to imagine exactly how it would

play06:18

unfold but we made a very big bet on

play06:21

deep learning uh which is one of the

play06:23

primary tools that is powering this new

play06:26

Revolution um before anybody was

play06:28

involved in deep learning so the the

play06:30

current chief scientist and co-founder

play06:33

of open AI the creators of chat GPT was

play06:37

one of our interns uh back in 2011

play06:40

Geoffrey Hinton who was the who

play06:43

subsequently became the um one of the

play06:46

heads of AI at Google and is known now

play06:48

as the Godfather of AI recently in the

play06:51

Press worried about the consequences he

play06:54

was our first advisor our paid advisor I

play06:56

think his salary was £25,000 a year to

play06:59

us so I think three of the six

play07:02

co-founders of open AI at some point

play07:04

passed through deep mind either to give

play07:06

talks or were actually members of the

play07:07

team so really is incredibly about

play07:11

timing you know we got the timing

play07:13

absolutely right we were way ahead of

play07:15

the curve at that moment and somehow we

play07:17

managed to hang on so you you were there

play07:19

for a while and then let's fast forward

play07:20

a bit um you can read the rest of this

play07:22

in the book you you now have co-founded

play07:25

and run inflection Ai and you are

play07:27

creating an AI called Pi which you can

play07:31

interact with if you'd like tell us what

play07:33

pi does so Pi stands for personal

play07:37

intelligence and I believe that over the

play07:40

next few years everybody is going to

play07:43

have their own personal AI there are

play07:46

going to be hundreds of thousands of AIS

play07:48

in the world they'll represent

play07:50

businesses they'll represent Brands

play07:53

every government will have its own AI

play07:55

every nonprofit every musician artist

play07:59

record label everything that is now

play08:02

represented by a website or an app is

play08:06

soon going to be represented by an

play08:08

interactive conversational intelligence

play08:11

service that represents the B brand

play08:13

values and the ideas of whatever

play08:16

organization is out there and we believe

play08:19

that at the same time everybody will

play08:21

want their own personal AI one that is

play08:23

on your side in your corner helping you

play08:26

to be more organized helping you to make

play08:28

sense of the world um it really is going

play08:31

to function as almost like a chief of

play08:33

staff all you know prioritizing planning

play08:36

teaching

play08:37

supporting supporting you so that sounds

play08:40

great um what does it actually mean

play08:43

though in practice because so often this

play08:44

conversation about AI it's at this point

play08:46

then it turns into the apocalyptic we're

play08:48

going to end up you know wiping

play08:49

ourselves out because there'll be some

play08:51

Rogue person you know sitting in a

play08:53

garage somewhere who will you know

play08:54

unleash a virus that will kill us all so

play08:56

before we get to all of that stuff in

play08:59

let's say say I don't know 5 years I

play09:02

you've said within the next 3 to 5 years

play09:05

you think AI will reach human level

play09:08

capability across a variety of tasks

play09:10

perhaps not everything but a variety so

play09:13

paint a picture for us of what life will

play09:16

be like in five years at 2028 I first of

play09:19

all will it be you and me here or will

play09:21

there be the kind of Mustafa Ai and the

play09:25

bot okay let me let me just go back 10

play09:28

years just to to give you a sense for

play09:31

what has already happened and why the

play09:34

predictions that I'll make I think are

play09:36

plausible so the Deep learning

play09:39

Revolution enabled us to make sense of

play09:41

raw messy data so we could use AIS to

play09:47

interpret the content of images classify

play09:50

whether an image contains dogs or cats

play09:53

what those pixels actually mean we can

play09:56

use it to understand speech so when you

play09:58

dictate into your phone and it

play10:00

transcribes it and Records perfect text

play10:03

we can use it to do language translation

play10:05

all of these are classification tasks

play10:08

we're essentially teaching the models to

play10:10

understand the messy complicated world

play10:12

of raw input data well enough to

play10:15

understand the objects inside that data

play10:18

that was the classification Revolution

play10:21

the first 10 years now we're in the

play10:23

generative Revolution right so these

play10:25

models are now producing new images that

play10:27

you've never seen before they're

play10:29

producing new text that you've never

play10:30

seen before they can generate pieces of

play10:32

music and that's because it's the flip

play10:34

side of that coin the first stage is

play10:37

understanding and classifying if you

play10:39

like the second stage having done that

play10:41

well enough you can then ask the AI to

play10:43

say given that you understand you know

play10:46

what a dog looks like now generate me a

play10:48

dog with your idea of pink with your

play10:51

idea of yellow spots or whatever and

play10:53

that is an interpolation it's a

play10:55

prediction of the space between two or

play10:58

three or four

play10:59

Concepts and that's what's produced this

play11:01

generative AI revolution in all of the

play11:04

modalities as we apply more computation

play11:07

to this process so we're basically

play11:09

stacking much much larger AI models and

play11:13

we're stacking much much larger data the

play11:15

accuracy and the quality of these

play11:17

generative AIS gets much much better so

play11:20

just to give you a sense of the

play11:22

trajectory we're on with respect of

play11:24

computation over the last 10 years every

play11:28

single year the amount of compute that

play11:31

we have used for The Cutting Edge AI

play11:33

models has grown by 10x so 10x 10x 10x

play11:39

10x 10 times in a row now that is

play11:42

unprecedented in technology history

play11:44

nowhere else have we seen a trajectory

play11:47

anything like that over the next 5 years

play11:50

We'll add probably three or four orders

play11:52

of magnitude basically another thousand

play11:55

times the compute that you see used

play11:57

today to produce GPT-4 or the chat model

play12:00

that you might interact with and it's

play12:02

really important to understand that that

play12:03

might be a technical detail or something

play12:05

but it's important to grab like sort of

play12:07

grasp that because when people talk

play12:10

about GPT-3 or GPT-3.5 or GPT-4 the

play12:14

distance between those models is in fact

play12:17

10 times compute it's not incremental

play12:20

it's exponential and so the difference

play12:23

between GPT-4 and GPT-2 is in fact a 100

play12:27

times worth of compute the largest

play12:29

compute infrastructures in the world

play12:31

basically to learn all the relationships

play12:33

between all the inputs of all of this

play12:36

raw data so what does that mean what

play12:39

does that entail enable them to do in

play12:41

the next phase we'll go from being able

play12:44

to perfectly generate so speech will be

play12:47

perfect video generation will be perfect

play12:50

image generation will be perfect

play12:51

language generation will be perfect to

play12:53

now being able to plan across multiple

play12:57

time Horizons so at the moment you could

play12:59

only say to a model give me you know a

play13:01

poem in the style of X give me a new

play13:04

image that matches these two Styles it's

play13:07

a sort of oneshot prediction next you'll

play13:09

be able to say generate me a new product

play13:13

right in order to do that you would need

play13:15

to have the AI go off and do research to

play13:19

you know look at the market and see what

play13:20

was potentially going to sell what are

play13:22

people talking about at the moment it

play13:24

would then need to generate a new image

play13:27

of what that product might look like

play13:28

compared compared to other images so

play13:30

that it was different and unique it

play13:32

would then need to go and contact a

play13:34

manufacturer and say Here's the

play13:35

blueprint this is what I want you to

play13:37

make it might negotiate with that

play13:39

manufacturer to get the best possible

play13:41

price and then go and Market it and sell

play13:43

it those are the capabilities that are

play13:45

going to arrive you know approximately

play13:48

in the next 5 years it won't be able to

play13:49

do each of those automatically

play13:51

independently there will be no autonomy

play13:53

in that system but certainly those

play13:56

individual tasks are likely to

play13:59

so that means that presumably the

play14:02

process of innovation becomes much much

play14:04

more efficient the process of managing

play14:06

things becomes much more efficient what

play14:08

does that mean and let's let's stick

play14:09

with the upside for the moment I will I

play14:11

promise you we'll get to all the

play14:12

downsides of which there are many but

play14:14

but what is that going to enable us to

play14:17

do I mean people talk about AI will help

play14:20

us solve climate change AI will lead to

play14:22

tremendous you know improvements in

play14:24

healthcare just talk us through what

play14:26

some of those things might be so we can

play14:27

see the upside

play14:30

intelligence has been the engine of

play14:32

creation everything that you see around

play14:35

you here is the product of us

play14:37

interacting with some environment to

play14:39

make a more efficient a more a cheaper

play14:42

table for example or a new iPad if you

play14:45

look back at history you know today

play14:47

we're able to create we're able to

play14:49

produce a kilo of grain with just 2% of

play14:53

the labor that was required to produce

play14:55

that same one kilo of grain 100 years

play14:58

ago so the trajectory of Technologies

play15:01

and scientific invention in general

play15:03

means that things are getting cheaper

play15:05

and easier to make and that means huge

play15:08

productivity gains right the insights

play15:11

the intelligence that goes into all of

play15:13

the improvements in agriculture which

play15:15

give us more with less are the same

play15:17

tools that we're now inventing with

play15:19

respect to intelligence so for example

play15:21

to stay on the theme of Agriculture it

play15:24

should mean that we're able to produce

play15:26

new crops that are drought resistant

play15:28

that are pest resistant that are in

play15:30

general more resilient we should be able

play15:32

to to tackle for example climate change

play15:35

and we've seen many applications of AI

play15:37

where we're optimizing existing

play15:39

Industrial Systems we're taking the same

play15:42

big cooling infrastructure for example

play15:44

and we're making it much more efficient

play15:46

again we're doing more with less so in

play15:48

every area from healthcare to education

play15:50

to Transportation we're very likely over

play15:54

the next two to three decades to see

play15:56

massive efficiencies invention think of

play15:59

it as the interpolation I described with

play16:02

respect to the images the the the AI is

play16:05

guessing the space between the dog the

play16:08

pink color and the yellow spots it's

play16:12

imagining something it's never seen

play16:14

before and that's exactly what we want

play16:16

from AI we want to discover new

play16:19

knowledge we want it to invent new types

play16:21

of science new solutions to problems and

play16:24

I think that's really what we're likely

play16:25

to get we I believe that if we can get

play16:28

that right we're headed towards an era

play16:31

of radical abundance imagine every great

play16:35

scientist every entrepreneur you know

play16:38

every person having the best possible

play16:40

aide you know Scientific Advisor research

play16:44

assistant chief of staff tutor coach

play16:48

Confidant each of those roles that are

play16:51

today the you know exclusive Preserve of

play16:54

the wealthy and the educated and those

play16:56

of us who live in peaceful civilized

play16:57

societies those roles those capabilities

play17:01

that intelligence is going to be widely

play17:03

available to everybody in the world just

play17:06

as today no matter whether you are a you

play17:09

know a millionaire or you earn a regular

play17:12

salary we all get exactly the same

play17:14

access to the best smartphone and the

play17:17

best laptop that's an incredibly

play17:19

meritocratic story which we kind of have

play17:21

to internalize you know the Best

play17:23

Hardware in the world no matter how rich

play17:25

you are is available to at least the top

play17:27

two billion people

play17:29

and that is I think that is going to be

play17:31

the story that we see with respect to

play17:32

intelligence all right enough upbeat

play17:34

stuff that was that was we've had 20

play17:36

minutes of upbeat which is more than

play17:37

you've had in most of the the interviews

play17:40

you've done uh but you didn't call your

play17:43

book you know the coming nirvana you called

play17:46

it the coming wave and I'm told that you

play17:48

were thinking that the original title

play17:49

was going to be containment is not

play17:51

possible I'm glad you didn't call it

play17:53

that it wouldn't have sold so well uh

play17:56

but explain the argument you're make

play17:58

making is not actually nirvana is around

play18:01

the corner in fact it's a much much more

play18:03

subtle argument than that so tell us

play18:06

what the downsides are and what it is

play18:08

that your book the focus on containment

play18:10

is in the book is about yeah I mean I I

play18:12

think I'm pretty wide-eyed and honest

play18:14

about the potential risks and you know

play18:17

we if if you take the trajectory that I

play18:20

predicted that more powerful models are

play18:23

going to get smaller cheaper and easier

play18:26

to use which is the history of the

play18:28

transition which is the history of every

play18:30

technology and you know value basically

play18:33

that we've created in the world if it's

play18:35

useful then it tends to get cheaper and

play18:37

therefore it spreads far and wide and in

play18:39

general so far that has delivered

play18:42

immense benefits to everybody in the

play18:44

world and it's something to be

play18:45

celebrated proliferation so far has been

play18:48

a really really good thing but the flip

play18:51

side is that if these are really

play18:53

powerful tools they could ultimately

play18:56

Empower a vast array of bad actors to

play19:00

destabilize our world you know everybody

play19:03

has an agenda has a set of political

play19:05

beliefs religious beliefs cultural ideas

play19:08

and they're now going to have an easier

play19:09

time of advocating for it you know so at

play19:12

the extreme end of the spectrum you know

play19:14

there are certain aspects of these

play19:16

models which provide really good

play19:17

coaching on how to manufacture

play19:20

biological and chemical weapons it's one

play19:22

of the capabilities that all of us

play19:24

developing large language models over

play19:26

the last year have observed they've been

play19:28

trained on all of the data on the

play19:30

internet and much of that information

play19:32

contains potentially harmful things

play19:34

that's a relatively easy thing to

play19:36

control and take out of the model at

play19:39

least when you're using a model that is

play19:40

manufactured by one of the big companies

play19:43

they want to abide by the law they don't

play19:45

want to cause harm so we basically

play19:47

exclude them from the training data and

play19:49

we prevent those capabilities the

play19:51

challenge that we have is that everybody

play19:55

wants to get access to these models and

play19:56

so they're widely available in open

play19:59

source you know you can actually

play20:01

download the code to run albeit smaller

play20:04

versions of Pi or chat GPT for no cost

play20:09

and if that trajectory continues over 10

play20:11

years you get much much more powerful

play20:13

models that are much smaller and more

play20:16

you know

play20:17

transferable and you know people then

play20:19

who want to use them to cause harm have

play20:21

an easier time of it I think that's a

play20:23

really important distinction that there

play20:25

are you know the leading companies you

play20:28

Google deep mind you know open AI who

play20:32

have the biggest models now and they're

play20:33

a relatively small number of these ones

play20:35

and they are bigger and more powerful

play20:37

but not far behind are a whole bunch of

play20:41

open- source ones and so the question is

play20:43

then for your containment c c can you

play20:47

prevent the open-source ones which will

play20:49

potentially be available to the you know

play20:51

angry teenager in his garage or her

play20:53

garage can those ones be controlled or

play20:56

not okay the darker side of my

play20:59

prediction is that these are

play21:01

fundamentally

play21:03

ideas you know they're they're

play21:05

intellectual property it's knowledge and

play21:07

knowhow an algorithm is something that

play21:09

can largely be expressed on three sheets

play21:12

of paper and actually is readily

play21:14

understandable to most people you it's a

play21:16

little bit abstract but it you can wrap

play21:18

your head around it the implementation

play21:20

mechanism you know requires access to

play21:23

vast amounts of compute today but if in

play21:25

time you remove that constraint and you

play21:27

can actually run on a phone which you

play21:30

ultimately will be able to do in a

play21:32

decade then that's where the containment

play21:34

challenge you know comes into view and I

play21:36

think that there are also risks of the

play21:38

central centralized question right this

play21:40

is clearly going to confer power on

play21:43

those who are building these models and

play21:44

running them you know my own company

play21:46

included Google and the other big Tech

play21:48

providers so we don't eliminate risk

play21:51

simply by addressing the open source

play21:52

Community we also have to figure out

play21:54

what the relationship is between these

play21:56

super powerful tech companies that have

play21:59

lots of resources and the nation state

play22:01

itself which is ultimately responsible

play22:03

for holding us accountable so let's go

play22:06

through some of the most sort of frequently

play22:08

cited risks or indeed negative

play22:10

consequences and and the one that that

play22:12

you hear a lot is as AIS become you know

play22:17

equivalent to or exceed human

play22:19

intelligence across a wide range of

play22:20

tasks there won't be any jobs for any of

play22:22

us you know why would you employ a human

play22:24

if you could have an AI so history

play22:26

suggests that that's bunkum you know

play22:29

we've never yet run out of jobs and you

play22:31

know being a good paid up Economist I

play22:33

think it's a lump of Labor fallacy but

play22:35

lots and lots and lots of people say

play22:36

this what's going to happen to the jobs

play22:38

where are you on that well let's just

play22:39

describe the lump of Labor fallacy

play22:41

because I think it's important to sit

play22:42

with that because that is the historical

play22:44

Trend so far what it basically means is

play22:46

when we have when we automate things and

play22:48

we make things more efficient we we

play22:50

create more time for people to invent

play22:53

new things and we create more health and

play22:55

wealth and that in itself creates more

play22:56

demand and then we we end up creating

play22:59

new goods and services to satisfy that

play23:01

demand and so we'll continually just

play23:03

keep creating new jobs and roles and you

play23:06

can see that in the last couple decades

play23:07

there are many many roles that couldn't

play23:09

even have been conceived of 30 years ago

play23:12

from App designer all the way through to

play23:14

the present day prompt engineer of a

play23:16

large language model so that's one

play23:18

trajectory that is

play23:20

likely I think the question about what

play23:23

happens with jobs depends on your time

play23:25

Horizon so over the over the next two

play23:28

decades I think it's highly unlikely

play23:30

that we will see structural

play23:32

disemployment where people want to

play23:34

contribute their labor to the market and

play23:36

they just can't compete I think that's

play23:38

pretty unlikely there's certainly no

play23:40

evidence of it in the statistics today

play23:43

beyond that I do think it's possible

play23:46

that many people won't be able to even

play23:49

with an AI produce things that are of

play23:52

sufficient value that the market wants

play23:54

them and their AI jointly in the system

play23:57

I mean AIS are increasingly more

play23:59

accurate than humans they are more

play24:01

reliable they can work

play24:03

24/7 they're you know more stable and so

play24:07

you know I I I think that that's

play24:09

definitely a risk and I think that we

play24:10

should lean into that and be honest with

play24:13

ourselves that that is actually maybe an

play24:17

interesting and important destination I

play24:19

mean work isn't the goal of society

play24:22

sometimes I think we've just forgotten

play24:24

that actually society and life and

play24:26

civilization is about well-being and

play24:30

peace and prosperity it's about creating

play24:32

more efficient ways to keep us

play24:34

productive and healthy many people you

play24:37

know probably in this room and including

play24:39

us enjoy our work we love our work and

play24:41

we're lucky enough and we're privileged

play24:43

enough to have the opportunity to do

play24:45

exactly the work that we want I think

play24:47

it's super important to remember that

play24:48

many many people don't have that luxury

play24:50

and many people do jobs that they would

play24:52

never do if they didn't have to work and

play24:54

so to me the goal of society is a quest

play24:57

for radical abundance how can we create

play25:00

more with radically less and liberate

play25:03

people from the obligation to work and

play25:05

that means that we have to figure out

play25:06

the question of redistribution and

play25:08

obviously that is an incredibly hard one

play25:10

and obviously I address it in the book

play25:11

but is that's the thing that we have to

play25:13

focus on what does taxation look like in

play25:15

this new regime how do we capture the

play25:17

value that is created make sure that

play25:19

it's actually converted into Dollars

play25:20

rather than just a sort of value add to

play25:23

to GDP so we're going to get on to

play25:25

redistribution of the role of government

play25:26

in just a second but first to REM remind

play25:28

you and I should have said this at the

play25:29

beginning Mustafa and I are going to

play25:31

talk for perhaps another 15 20 minutes

play25:33

but then we're going to open it up to

play25:34

questions and for those of you who are

play25:36

watching on the live stream feel free to

play25:38

start asking them now because if this

play25:40

little AI that I have here is telling me

play25:42

that calls and notifications will be

play25:43

silenced that's not very helpful yeah

play25:45

now I've got an answer I do see the

play25:46

questions there so um please start

play25:48

writing in the questions and we will get

play25:50

to them in about 15 minutes but okay

play25:52

roll of government you need to have um

play25:55

you will in this world need more radical

play25:57

redistribution but one of the concerns

play26:00

is that Ai and the rise of AI makes

play26:04

actually the functioning of democracy

play26:05

ever harder we're already seeing lots of

play26:07

concerns about you know deep fakes

play26:10

wrecking the 2024 elections 4 billion

play26:13

people live in countries that will have

play26:15

elections next year people are worrying

play26:17

about 2024 never mind 28 or 34 and we

play26:20

just um Mustafa and I just had a

play26:23

conversation with Yuval Harari who is as

play26:25

pessimistic as you are um thoughtfully

play26:29

optimistic uh who basically said it was

play26:31

the end of democracy um uh I'm not sure

play26:33

that either you and I agreed but what is

play26:36

the consequence for Liberal democracy in

play26:39

the coming decades in this world of AI

play26:41

look I think the first thing to say is

play26:42

that the state we're in is is pretty

play26:45

bleak I mean trust in in governments and

play26:48

in politicians and the political process

play26:49

is as low as it has ever been um you

play26:52

know in in fact 35% of people

play26:55

interviewed in in a Pew study in the US

play26:57

think that army rule would be a good

play26:59

thing so we're already in a very fragile

play27:02

and anxious State and I think that the

play27:06

you know to sort of empathize with

play27:08

Yuval for a moment the argument would be

play27:10

that you know these new technologies

play27:11

allow us to produce new forms of

play27:13

synthetic media that are persuasive and

play27:16

manipulative that are highly

play27:17

personalized and they exacerbate

play27:19

underlying fears right so I think that

play27:23

is a real risk we have to accept that

play27:25

it's going to be much easier and cheaper

play27:27

to produce fake news right we have an

play27:29

appetite an insatiable addictive

play27:32

dopamine hitting appetite for untruth

play27:36

you know it sells quicker it it spreads

play27:38

faster and that's a foundational

play27:40

question that we have to address I'm not

play27:42

sure that it's a new risk that AI

play27:45

imposes it's something that Ai and other

play27:47

Technologies accelerate you know in and

play27:50

that's the challenge of AI That's that

play27:51

is a good lens for understanding the

play27:54

impact that AI has in general it is

play27:55

going to amplify the very best of us and

play27:58

it's also going to amplify the very

play28:00

worst of us and what about the fact that

play28:02

this is developing in a world which

play28:05

geopolitically is split in a way that it

play28:08

hasn't been at least in the last couple

play28:11

of in the post-Cold War world at all so

play28:13

we have the tensions between the US and

play28:15

China we have essentially a a sort of

play28:17

race for Global dominance between these

play28:20

two regimes in that kind of a world how

play28:24

can you achieve the sort of governance

play28:26

structures that you write about in your

play28:28

book that are needed to try and you know

play28:30

perhaps prevent the most extreme

play28:33

downsides of AI yeah I mean much as I've

play28:35

been accused of being an optimist about

play28:38

it I've also been accused of being a

play28:39

utopian about the interventions that we

play28:41

have to make um and I think that

play28:44

unfortunately that's just a statement of

play28:46

fact what's required is good functioning

play28:49

governance and oversight I mean the the

play28:51

companies are open and willing to expose

play28:55

themselves to audit and to oversight and

play28:58

I think that is a unique moment relative

play29:00

to past generations of tech CEOs and

play29:04

inventors and creators across the board

play29:07

we're being very clear that the

play29:09

precautionary principle is probably

play29:11

needed and that's a moment when we have

play29:13

to go a little bit slower be a little

play29:15

bit more careful and maybe leave some of

play29:17

the benefits on the tree for a moment

play29:20

before we pick that fruit in order to

play29:22

avoid harms I I think that's a pretty

play29:25

novel you know setup as it is but it

play29:28

requires really good governance it

play29:30

requires functioning democracies it

play29:32

requires good oversight I think that we

play29:34

do actually have that in Europe I think

play29:36

that the EU AI act which has been in

play29:38

draft now for three and a half years is

play29:41

super thorough and very robust and

play29:43

pretty sensible um and so in general

play29:45

I've been you know a fan of it and kind

play29:47

of endorsing it but people often say

play29:50

well if we get it right in the UK or if

play29:52

we get it right in Europe and the US

play29:54

what about China I mean I hear this

play29:56

question over and over again what about

play29:58

about China and I I think that's a

play29:59

really dangerous line of reasoning first

play30:02

it sort of demonizes China as though

play30:05

China has this sort of like maniacal

play30:07

suicidal mission to at all costs at any

play30:11

cost you know sort of take over the

play30:13

world and you know be the next dominant

play30:15

Global power I mean so far I don't see

play30:17

any evidence of of that I mean you know

play30:19

I'm not ruling it out I'm not a you know

play30:22

sympathizer but I I think we should just

play30:24

be wide-eyed about the actions they're

play30:26

actually taking at the moment they have

play30:28

a self-preservation Instinct just as we

play30:30

do and the more that we can appeal to

play30:33

that you know desire to you know have

play30:36

their citizens benefit from economic

play30:39

interdependence and from peace and

play30:40

prosperity and well-being we're both

play30:43

aligned in those incentives I think the

play30:45

second thing is it's dangerous to sort

play30:47

of point the finger at you know China

play30:50

because actually we can't just have a

play30:52

race to the bottom on values we have to

play30:55

decide what we stand behind right if

play30:57

we're not you know I mean I I'm a

play31:00

believer that we shouldn't have a large

play31:01

scale State surveillance apparatus

play31:03

enabled by AI um we shouldn't do that

play31:06

just because China are doing it we

play31:08

shouldn't get into you know an arms race

play31:10

and take risks just because they're

play31:12

taking those risks and that's difficult

play31:14

for some people to accept because you

play31:15

know they might be hyper pragmatic and

play31:18

you know I think that that only leads to

play31:20

an inevitable self-fulfilling prophecy

play31:22

that we both end up taking terrible

play31:24

risks which are unnecessary so what

play31:27

should government or let's be concrete

play31:29

what should this government we're in the

play31:30

UK and presumably most people here are

play31:33

from London the the British government

play31:35

wants to be the superpower of AI an AI

play31:38

superpower um and is having an AI

play31:41

conference on AI safety in November

play31:43

there's a big Focus here what

play31:46

should this government or indeed other

play31:48

governments be doing concretely to

play31:50

minimize the risks what should be is

play31:51

there stuff that should be banned now is

play31:53

there rules of the road that

play31:55

should be put in place so so the first

play31:58

thing is that governments have to build

play32:00

technology you know we've we've got into

play32:02

this habit of Outsourcing and

play32:05

commissioning third parties to create

play32:07

technology and I I think it's really

play32:09

difficult to be able to control what you

play32:11

don't understand and unless you build it

play32:14

you don't deeply understand it so I

play32:15

think that's just the first thing which

play32:17

in itself is very controversial when I

play32:19

propose that in government people sort

play32:20

of throw up their hands and there's a

play32:22

lack of will there's a lack of

play32:23

self-confidence there's a lack of belief

play32:25

that government can be a creator a maker

play32:28

especially on the technology front to do

play32:31

that I think the second thing is that we

play32:33

have to have deeply Technical and

play32:35

Engineering people as well as you know

play32:37

technologists more generally in cabinet

play32:40

positions and at the heads of every

play32:42

government Department you know it's it's

play32:44

pretty crazy to me that we don't have a

play32:46

CTO a chief technology officer in

play32:49

cabinet you know running our big

play32:51

institutions all of that is outsourced

play32:53

the challenge is to be able to do that

play32:56

you just have to pay close to private

play32:59

sector salaries again another highly

play33:01

sensitive topic that no one wants to

play33:03

talk about — this idea that no one should ever earn more than

play33:04

the Prime Minister you know to me this

play33:06

makes no sense how can we have an open

play33:08

labor market where on the one hand we're

play33:10

saying to people you know go work for

play33:12

whoever you like and on the one hand you

play33:15

know people are being paid 10x and on

play33:17

the other we're saying well take this

play33:18

huge sacrifice in the name of Public

play33:20

Service the Practical reality is that if

play33:23

that happens over many decades the net

play33:26

effect is that you have quality of one

play33:28

type over here and another type over

play33:31

there and that's really what we're

play33:32

facing we have to confront that reality

play33:34

it's very difficult for people to accept

play33:36

that we should be paying super large

play33:38

salaries it creates other issues around

play33:40

how we hold you know those kinds of you

play33:42

know people accountable given how much

play33:44

of the public purse they might be

play33:46

earning

play33:47

Etc but fundamentally those two things

play33:50

enable a third thing which is

play33:53

governments have to take risks with

play33:55

regulation there is a fear that

play33:58

governments act too aggressively or too

play34:01

experimentally and upset the big

play34:04

companies and you know as someone who's

play34:06

on the receiving end of this quite a lot

play34:08

and have been in the past where I you

play34:09

know mistakes have been made I still

play34:12

think the right thing to do is to give

play34:15

governments a break let them make

play34:17

mistakes let them make investments that

play34:19

don't work praise the experimental

play34:21

government structures have faith in the

play34:23

political process participate encourage

play34:26

it because otherwise you know there's

play34:28

just this spiral of decline this sort of

play34:30

lack of confidence that we can actually

play34:32

do the right thing that we should do the

play34:33

right thing and then that ultimately

play34:35

leads to the self-fulfilling prophecy

play34:36

much like with China and do you think

play34:38

that your view is the exception in your

play34:41

industry I mean The Stereotype is a

play34:44

bunch of 30-year-old Tech Bros who you

play34:46

know think the government is useless and

play34:47

who are going to kind of change the

play34:49

world with AI and you know we're going

play34:51

to do this is that is that an accurate

play34:53

stereotype are you the exception I mean

play34:56

there is a you know should we worry

play34:58

about the hubris of people in your

play35:00

industry I I think you know we have

play35:02

polarization everywhere so the The

play35:04

Stereotype is probably true but the the

play35:07

counter is that you know you know we we

play35:10

can do it without technology and I think

play35:12

that's totally wrong like technology is

play35:14

an absolutely necessary but not

play35:16

sufficient part of the process and I I

play35:18

think that some people in — like,

play35:20

Silicon Valley does have a tendency to

play35:22

be much more techno-libertarian there's

play35:23

no question about that the government is

play35:25

the problem that the objective is to

play35:28

eradicate the state and run it

play35:30

completely independently and I'll be

play35:31

honest there are some very very

play35:32

influential very powerful people who

play35:35

have that objective are building towards

play35:37

that objective with both their companies

play35:39

and their fortunes and you know I'm I'm

play35:41

very skeptical of them and I you know

play35:43

obviously I'm on the other side of that

play35:45

and and that's what shapes a lot of the

play35:47

public fear about this that you have a

play35:49

bunch of hyper powerful people who are

play35:51

shaping this um with a

play35:54

kind of disdain for the state and

play35:56

the Democratic process two two quick

play35:58

questions for me which I know someone

play35:59

would ask otherwise and then we're going

play36:00

to audience questions the first one is

play36:02

the whole question of the

play36:05

singularity we can't have a conversation

play36:07

about AI without the singularity will it

play36:08

happen when will it happen I honestly

play36:11

think it's a very unhelpful framing of

play36:14

what's to come and people jump to this

play36:16

framing because it's easy to point to

play36:19

Terminator and Skynet but it's it's

play36:23

almost like leaping to the Moon before

play36:25

we've even invented the transistor I

play36:26

mean it's a it's hundreds of years away

play36:28

I it's really unhelpful there are many

play36:32

practical near-term operational

play36:34

capabilities that you can predict just

play36:36

as I've tried to describe and you can

play36:38

then use those to wrestle with what are

play36:40

the consequences for the nation state

play36:42

how does this change our businesses what

play36:43

does this mean for our governments so in

play36:46

general I don't make those predictions

play36:48

I'm very skeptical that the

play36:50

superintelligence framing is is useful

play36:52

to us what about the other one that you

play36:54

know the backyard wannabe AI commentators are always

play36:58

talking about which is the odds of

play37:00

existential catastrophe what are the

play37:02

odds that we will wipe ourselves out

play37:04

with this again I mean I think very very

play37:07

low I I really think what's very low I I

play37:09

I think infinitesimally small such that it's not

play37:12

worth putting a number on it — the reason I asked you

play37:14

that is because I asked one of your um

play37:16

someone somewhat similar to you what

play37:17

this was oh very low they said and I

play37:19

said what's very low oh about

play37:21

5% — yeah yeah yeah so you think it's

play37:25

infinitesimally small, basically zero okay well that's

play37:28

a good place to end on all right we're

play37:29

going to open now to your questions and

play37:31

questions um from the online audience oh

play37:34

this is a good question from Kitty

play37:36

hadock who asks what will be the impact

play37:38

of all that computer power on our carbon

play37:39

emissions or will AI be able to enhance

play37:42

productivity so we reduce carbon

play37:44

elsewhere yeah another hot take on this

play37:47

very low and really inconsequential the

play37:51

amount of carbon that we spend on our

play37:53

data centers is genuinely minuscule

play37:56

relatively speaking

play37:58

secondly most of that happens in

play38:01

completely renewable data centers Google

play38:03

and Microsoft are both entirely 100%

play38:06

renewable Google actually owns the

play38:08

largest wind farm uh largest set of wind

play38:12

farms in the world um one of the

play38:14

projects that I worked on whilst I was

play38:16

at Deep Mind was making the entire

play38:18

windfarm fleet 20% more efficient so you

play38:22

know right from the outset they have

play38:23

been focused on this I'm not saying

play38:25

there aren't other environmental

play38:27

consequences like the use of you know

play38:29

gallium and cobalt in the actual chip

play38:31

manufacturing and so on but I honestly

play38:33

think that relative to the benefits that

play38:35

we're seeing and with respect to the

play38:37

absolute cost of carbon per unit of

play38:40

computation it's very very small and and

play38:42

just to follow up to that because an

play38:44

argument I have often heard is that the

play38:47

cost of electricity and the access to

play38:49

power will be a constraint on the

play38:51

development of these AIS and their

play38:53

proliferation do you also think that's

play38:55

not true no I I I I think I think that's

play38:57

not true I mean I think that's not true

play38:59

I I I think that some data centers will

play39:03

be at the 100 megawatt scale which is

play39:07

maybe a single-digit percentage of a

play39:10

small City's electricity consumption but

play39:13

we're talking about a very small number

play39:15

at the 100 megawatt scale I mean that

play39:17

really is enormous nothing like that

play39:18

exists today so don't don't worry about

play39:21

the carbon consequences of the actual

play39:23

AIS um from the audience questions here

play39:26

yes lady here in the second row I'm not

play39:29

quite sure how this works — do you get a

play39:30

microphone does it work that way yes it's

play39:32

on its way down I on its way down

play39:36

here right here lady in the second row

play39:43

there thank you um uh sheru from number

play39:47

of Education

play39:49

hello education companies that use AI um

play39:53

my question to you is um if you think

play39:56

about two industries say Healthcare and

play39:58

education and you think about um the

play40:02

applications that uh that that ai ai has

play40:07

could you choose between the two which

play40:09

you would hold um the most hope for and

play40:13

um how should they be thinking about it

play40:15

should they be thinking about procuring

play40:17

it and how do you safely or procure it

play40:20

well um or again as you said you could

play40:23

produce it but some of those

play40:24

organizations may not be in a position

play40:26

to produce it anytime time soon so if

play40:28

you're a procurer um how do you do that

play40:31

well and what are some of the Frameworks

play40:33

that should be used for that yeah thank

play40:35

you that's a great question I mean on

play40:37

the I'm probably most excited in terms

play40:39

of the immediate near-term impact about

play40:42

education I mean these models are

play40:44

already being used I think the primary

play40:46

use case of chat GPT is in fact homework

play40:49

help and people often think oh my kids

play40:52

are you know copying and copy pasting

play40:55

but actually if you actually watch the

play40:57

way they're using these models and many

play40:59

people use our model Pi for

play41:02

exactly this reason it's a

play41:04

conversational interaction much like an

play41:07

enthusiastic teacher might speak to a

play41:10

child about the interest that they have

play41:12

so the child or the Learner in general

play41:14

gets to phrase the question in exactly

play41:17

their style picking on exactly the thing

play41:19

that they're interested in asking the

play41:21

odd obscure poorly phrased you know not

play41:25

complete picture type question and of

play41:27

course the AI is infinitely patient

play41:29

provides really detailed mostly factual

play41:32

information I mean it's not always

play41:33

perfect but it will be perfect and I

play41:36

think that's an unbelievable um

play41:39

meritocratic gain for everybody I mean I

play41:42

think we need to picture a world in 5

play41:44

years time where the best education in

play41:48

the world completely personalized

play41:51

entirely factually accurate is available

play41:53

to absolutely everybody who wants it on

play41:56

the planet pretty much for

play41:58

free which sounds amazing um how do you

play42:01

go from where we are now to that uh to

play42:03

that that world I I think the beauty of

play42:07

the um of these models is that they have

play42:10

an inherent tendency to proliferate and

play42:12

get smaller I mean that this is the

play42:14

upside of proliferation they spread

play42:17

because everybody wants access everybody

play42:19

wants to integrate them you know there

play42:21

are so many competing models now the

play42:23

cost of — um, the cost of

play42:26

buying model per word so if you're

play42:29

building an app for example you'll go to

play42:32

one of the three or four big model

play42:33

creators and you pay per word that cost

play42:37

has come down

play42:39

70x since

play42:41

January because we're all competing with

play42:43

each other right so that means that you

play42:45

can now take a regular app that you

play42:46

might have been developing for you know

play42:48

years in its current instantiation and

play42:50

add a conversational widget in fact

play42:53

we're doing this at The Economist with

play42:54

the ecobot secret project underway

play42:57

clearly not so secret

play43:01

anymore thank

play43:02

you

play43:04

sorry and

play43:06

you and you integrate you integrate the

play43:10

conversational um element into your

play43:12

existing workflow so you should be able

play43:13

to ask any question in the style and the

play43:16

theme of your brand about the specific

play43:17

content that you have and it will be

play43:19

like a widget it's like a plug andplay

play43:21

widget that you can put anywhere in the

play43:23

app and that's what I mean about

play43:25

proliferation obviously everybody finds

play43:26

that that useful and you'll be able to

play43:28

use that tool in a low code or no

play43:32

code environment it'll you know if you

play43:34

see how the image generation models are

play43:36

being integrated into Adobe today if

play43:39

you're already a user of adobe you're

play43:41

you're using the absolute Cutting Edge

play43:44

AI models in a drag and drop way like no

play43:47

training required you if you're building

play43:49

a new website today it's drag and drop

play43:51

you just grab a little widget and plop

play43:54

it over here and suddenly you have you

play43:56

know a YouTube player with your video

play43:58

and suddenly you have a conversational

play43:59

you know interaction with a language

play44:01

model that is conditioned over all your

play44:03

data so I think it's important to wrap

play44:06

your head around the idea that this is

play44:09

going to be widely available to

play44:10

everybody there isn't going to be an

play44:12

access issue and the risk and harm comes

play44:15

from mitigating the downsides of the Bad

play44:17

actors who might, you know, use it for nefarious purposes. But the upsides are incredible.
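To make the integration pattern described above concrete, here is a minimal sketch of the "pay per word, plug-and-play conversational widget" idea: an app sends a user's question plus its own content to a hosted model and is billed per token. It assumes the OpenAI Python SDK purely as a stand-in for "one of the three or four big model creators"; the model name, example price, and helper function are illustrative assumptions, not anything quoted in the talk.

```python
# Minimal sketch (illustrative, not Inflection's or The Economist's actual code) of a
# conversational widget backed by a hosted, pay-per-token model, conditioned on your
# own content. Assumes the OpenAI Python SDK; model name and price are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_from_our_content(question: str, brand_content: str) -> str:
    """Answer a user's question in the house style, grounded in the supplied content."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Answer in our publication's tone, using only this content:\n\n"
                        + brand_content},
            {"role": "user", "content": question},
        ],
    )
    usage = response.usage  # per-token metering is the "pay per word" part
    rough_cost = usage.total_tokens / 1000 * 0.0005  # hypothetical $ per 1k tokens
    print(f"{usage.total_tokens} tokens, roughly ${rough_cost:.5f} for this call")
    return response.choices[0].message.content
```

The point of the sketch is the shape of the integration rather than the vendor: because several providers compete on the same kind of per-token interface, the same widget can sit in a low-code or no-code tool just as easily as in hand-written application code.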

play44:23

Let's go — lots and lots

play44:25

of hands let's get yes lady there but

play44:27

I'm going to get one from online while

play44:28

you get your microphone and the one from

play44:30

online is a question from São Paulo gosh

play44:32

your audience is is going from from a

play44:34

long way um Rene dealo Jr asks when you

play44:38

say we will solve this and that who is

play44:41

this we Humanity a good question

play44:44

corporations the UN or Elon

play44:48

Musk I definitely hope it's not Elon

play44:52

Musk I think of it as the kind of the

play44:56

Comm community of researchers inventors

play44:59

and creators there's this sort of

play45:01

dialogue sometimes you see Snippets of

play45:03

it on Twitter sometimes you see it in

play45:05

the research papers that academics

play45:07

publish you know you see it in the blogs

play45:09

and the products that big companies

play45:11

produce there is this sort of unfolding

play45:14

you know evolving model of an ecosystem

play45:17

which is referencing each other creating

play45:20

and evolving and so when I say we I

play45:22

certainly don't mean me at inflection um

play45:25

my current company I just mean the the

play45:27

the ecosystem of humanity like we're

play45:30

we're trending collectively in a

play45:31

direction of invention and creation just

play45:35

one tiny does that ecosystem include

play45:37

Chinese

play45:38

scientists so 10 years ago Chinese

play45:41

scientists were not really part of the

play45:44

conversation they weren't really very

play45:46

relevant over the last 10 years they

play45:49

have launched onto the scene producing

play45:52

very high quality research creative

play45:55

research you know the old stereotype

play45:57

was that they can only copy and steal

play46:00

again I think a demonization partly by

play46:02

Elon Musk actually who was a big

play46:04

proponent of this idea that they were

play46:05

just robbing our intellectual property

play46:07

and there was some of that but largely

play46:09

they were just as creative as us and

play46:11

they wanted to get access to these tools

play46:13

to build their own businesses and and

play46:15

provide new products and services for

play46:17

their own citizens for the same reason

play46:18

as we do and so if you start from that

play46:21

assumption then of course they're

play46:22

participating in this ecosystem of

play46:24

course they're creating incredible

play46:25

models you know they have have their own

play46:27

constraints with respect to censorship

play46:30

and that has slowed them down by a

play46:32

little bit but they're actually not

play46:34

going to be that far behind now I mean

play46:36

there are some issues with the export

play46:37

controls and they don't have access to

play46:39

Cutting Edge models but I don't think

play46:41

that's going to hold them back for very

play46:42

long interesting yes go ahead thanks

play46:44

this is excellent uh my question is

play46:46

about AI ideas and the people needed to

play46:48

think of them and if you take someone

play46:50

like Steve Jobs for instance you had a

play46:52

very specific person very specific

play46:53

interests and skills and talent to be

play46:55

able to develop not only Tech technology

play46:57

but the brand and a point of view on the

play46:59

world that came with that do you think

play47:01

AI would be capable of coming up let's

play47:03

say with the the version of the Apple

play47:06

idea now will it be in the future or

play47:09

will it simply be a machination of past

play47:12

information so I think people have often

play47:14

characterized these AIS as regurgitating

play47:17

their training data right uh or

play47:19

reproducing whatever they have seen

play47:22

previously and I think that's a kind of

play47:24

misunderstanding of what they do they're

play47:27

almost always doing interpolation the

play47:30

thing I described earlier is predicting

play47:32

the space between two ideas they're

play47:34

saying let me mash together these two

play47:37

concepts just like the dog and the

play47:39

yellow spots and whatever or take your

play47:41

pick of any com combination and that's

play47:43

creativity you know fundamentally when I

play47:46

invent something I'm really being

play47:48

inspired by a huge range of different

play47:51

experiences and ideas and I'm using

play47:54

those to then produce a novel prediction

play47:57

or generation at any given moment and

play47:59

I'm testing it out and seeing if it's

play48:01

you know useful or if it makes sense or

play48:03

if it catches on and then it has a life

play48:05

of its own and it's sort of independent

play48:07

of me so I think for the next couple of

play48:09

decades these AIS are going to Aid the

play48:13

human in that process of creation and

play48:15

invention and Discovery they're not

play48:16

going to wander off and have their own

play48:18

agency and do their own thing I mean

play48:19

it's just not just not possible the

play48:22

capabilities just aren't there and won't

play48:23

be there in the near term to do that

play48:25

right and so I think it's going to be

play48:27

the human AI combo for a good time to

play48:29

come that does the

play48:33

creation.

Exactly — it's more of the assistant.

Exactly, the brilliant assistant.
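As a toy illustration of the interpolation point — "predicting the space between two ideas" rather than replaying training data — here is a minimal sketch with hand-made three-dimensional "concept vectors". The vectors and concept names are invented for the example; real models interpolate in learned, high-dimensional embedding spaces.

```python
# Toy sketch of interpolation between concepts. The vectors below are made up for
# illustration; a real model would use learned, high-dimensional embeddings.
import numpy as np

concepts = {
    "dog":          np.array([1.0, 0.1, 0.0]),
    "yellow spots": np.array([0.0, 1.0, 0.2]),
    "cat":          np.array([0.9, 0.0, 0.3]),
    "banana":       np.array([0.1, 0.9, 0.8]),
}

def blend(a: str, b: str, alpha: float) -> np.ndarray:
    """A point part-way between two concepts: 'mash together these two concepts'."""
    return (1.0 - alpha) * concepts[a] + alpha * concepts[b]

def nearest(v: np.ndarray) -> str:
    """Decode a blended point back to the closest known concept."""
    return min(concepts, key=lambda k: float(np.linalg.norm(concepts[k] - v)))

midpoint = blend("dog", "yellow spots", 0.5)  # a novel point between two ideas
print(midpoint, "->", nearest(midpoint))
```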

play48:37

Right, let's go further back —

play48:39

yes gentlemen there for row

play48:42

back let's get

play48:48

you hi um Are you seriously trying to

play48:52

suggest that the um no

play48:56

that the AI companies are able to

play49:00

self-regulate and didn't the banks prove

play49:03

that that is an impossible concept but

play49:06

the banks are highly highly regulated

play49:08

and so not just by themselves but look

play49:11

I'm absolutely not proposing

play49:13

self-regulation I mean if if that came

play49:15

across then I apologize I'm wrong I mean

play49:17

in the in the book I really don't say

play49:19

that I go to Great length to say that

play49:22

independent external technical expertise

play49:25

is required to do governance properly I

play49:28

think the Practical challenge as you

play49:30

know Zanny pushed back on me earlier

play49:31

today when we were talking with Yuval

play49:33

is where are these competent Regulators

play49:35

who get the technical aspects where is

play49:37

this Democratic process that gives us

play49:39

confidence that we can appoint people to

play49:41

to conduct that kind of oversight so I

play49:43

think there's there's some pessimism

play49:46

that they're capable of doing that that

play49:48

should not mean that we sit around and

play49:49

do nothing in the process um you know

play49:52

for example we I visited President Biden

play49:55

6 weeks ago now at the White House with

play49:57

the other six AI companies Microsoft

play49:59

meta Google deepmind etc etc and we

play50:03

signed up to voluntary commitments that

play50:06

were that are precursor to regulation

play50:08

which the White House designed because

play50:10

they realized they can't pass new

play50:12

primary regulation anytime soon but the

play50:14

voluntary commitments are very material

play50:16

they we basically have said publicly we

play50:19

expose our models to expert independent

play50:22

scrutiny to Red Team or stress test find

play50:25

weaknesses in our own models once we

play50:27

identify those weaknesses we share them

play50:29

with each other and we share them

play50:30

publicly so in you know transparency in

play50:33

you know the open light of day and we

play50:35

know that that framework the voluntary

play50:37

commitments are a precursor to an

play50:39

executive order which is coming from the

play50:41

president sometime in the next few

play50:43

months they're also the basis for the

play50:45

Prime Minister Rishi Sunak's AI Summit in

play50:47

November in Bletchley Park where

play50:52

you know many world leaders and all the

play50:54

big tech companies are coming and those

play50:56

voluntary commitments are going to form

play50:57

the basis of the discussions for what

play50:59

becomes binding not just in the UK but

play51:02

hopefully worldwide so I'm totally with

play51:04

you that we're not going for a

play51:06

self-regulatory approach but you don't

play51:08

you don't think there's a conflict of

play51:10

interests well I I mean I I there's

play51:12

definitely a conflict of interest of

play51:14

course there's a conflict of interest I

play51:15

mean we are a profitable a for-profit

play51:18

company in fact I'm a public benefit

play51:21

Corporation so I think it's a kind of an

play51:23

important clarification um it's a new

play51:26

type of entity, closer to a B Corp, um, which is a

play51:29

hybrid for-profit nonprofit Mission it

play51:32

means that our directors have a legal

play51:35

obligation to factor in the impact of

play51:37

our activities on The Wider World both

play51:40

the environment and people materially

play51:42

affected by what we do who aren't just

play51:44

our customers and that doesn't solve all

play51:47

the issues with for-profit businesses

play51:48

and the conflict that you described but

play51:51

it's a first step in the right direction

play51:52

and I I believe that that's how change

play51:54

happens taking small steps in the right

play51:56

direction

play51:58

let's take a question from over there

play51:59

yes gentleman quite near the back with

play52:01

the white T-shirt y right

play52:05

there hello

play52:07

yeah hello yeah okay my question to you

play52:11

as an electronics engineer is should we

play52:13

now focus on the hardware part of it

play52:15

considering there's a monopoly going on

play52:17

and the concentration of chips to a

play52:18

certain country the hardware part of it

play52:20

is raising a very big question we saw it

play52:23

in COVID uh things are really bad when

play52:26

hardware supply goes down so is this a

play52:28

great time to focus on Hardware

play52:30

considering we are good with software

play52:32

part for now that that's a great

play52:34

question I mean we didn't really talk

play52:35

about that too much here but you know

play52:37

just just for everyone's benefit these

play52:39

AI models are trained on gpus Graphics

play52:43

processing units so chips that were

play52:45

previously used for gaming for

play52:47

representing Graphics in computers and

play52:49

we take each one of these chips and we

play52:51

daisy chain them together thousands and

play52:54

thousands of times we have a computer at

play52:57

inflection which is the size of four

play52:59

football pitches and has 25,000 of these

play53:02

chips Daisy chained together an enormous

play53:06

cluster. It cost about a billion and a half dollars.
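A quick back-of-the-envelope sketch using only the figures quoted here (25,000 GPUs, roughly $1.5bn) shows the implied all-in cost per daisy-chained chip; the per-chip number is an average that folds in networking and facilities, not a vendor list price.

```python
# Back-of-the-envelope arithmetic from the figures quoted in the talk; the result is
# an implied average per GPU slot (chip + networking + facility), not a quoted price.
num_gpus = 25_000
cluster_cost_usd = 1.5e9

implied_cost_per_gpu = cluster_cost_usd / num_gpus
print(f"implied all-in cost per GPU: ${implied_cost_per_gpu:,.0f}")  # ~$60,000
```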

play53:07

Now all of these chips are

play53:11

manufactured by one company NVIDIA who

play53:15

I'm sure people have heard have seen

play53:17

their share price go up by

play53:19

350% since January their chips are

play53:23

manufactured entirely in one Factory

play53:27

called

play53:28

TSMC — Taiwan Semiconductor Manufacturing

play53:31

Company — which is obviously in Taiwan

play53:34

the key component of their chips of

play53:37

their fabrication facility are

play53:39

manufactured by one company called asml

play53:42

a Dutch company so the supply chain is I

play53:46

mean we can talk about how this happened

play53:48

over 30 years but extremely narrow there

play53:51

really are no competing providers that

play53:53

are material at any of those three

play53:55

stages as a result the good news is that

play53:59

that means that there are choke points

play54:02

that can be used by Regulators to

play54:05

monitor who has access to the critical

play54:07

chips that enable the training of the

play54:09

models and of course restrict access to

play54:12

certain people so I think I Loosely

play54:14

alluded to the export controls a minute

play54:16

ago which is a new piece of legislation

play54:20

or a rule that the US Administration

play54:24

imposed on China last year which

play54:27

prevents China anyone in China any

play54:30

manufacturer in China from getting

play54:31

access to the latest version of these

play54:33

chips which means that they won't be

play54:35

able to train the GPT-5 level model a

play54:39

number of people have referred to this

play54:40

as a declaration of economic war on

play54:44

China and so you know I think that we

play54:46

have to be very cognizant of that

play54:49

denying them access to that is likely to

play54:51

deliver a significant Counterattack on

play54:54

you know the West we have hugely

play54:56

dependent on their supply chain in many

play54:58

many respects so yeah chips are

play55:01

absolutely at the heart of this both in

play55:03

good and bad ways so if you're focused

play55:06

on a chip company it's a big bet it

play55:08

takes a long time to mature but it has

play55:10

the potential to be the critical

play55:12

component here just a followup question

play55:14

yes uh do you think that open- Source

play55:17

Hardware will help in creating a better

play55:21

setup right now considering very few

play55:23

companies are focusing on creating the

play55:25

hardware and all of them are completely

play55:27

non uh completely for profit so

play55:29

something like Open Source Hardware

play55:31

focusing more helping will it help us

play55:34

create better computers creating better

play55:37

models with less power yeah so I I think

play55:40

open- Source Hardware is a serious

play55:43

effort and just to clarify I mean open

play55:44

source elements of Hardware design are

play55:46

used in many many areas so Open RAN for

play55:49

example is a hardware design for 5G

play55:52

masts which ensures they're

play55:53

interoperable it means that the software

play55:55

that runs your phone networks actually

play55:58

can run on any type of Hardware because

play56:00

the interface is standardized which is a

play56:02

great thing for competition there isn't

play56:04

a lock in between Hardware the builder

play56:06

of the masts and the software the people

play56:08

who run the operating system that sits

play56:11

on top of that the downside of it is

play56:13

that it has tended to be a bit more

play56:15

flaky than the fully integrated side of

play56:18

things so I think you should be

play56:19

wide-eyed about it it isn't going to be

play56:21

the Panacea to solve all of our problems

play56:23

anytime soon let's take another question

play56:25

there yes lady in the fourth

play56:30

row thanks so much hi thank you both um

play56:33

I'm javah Rari I lead digital regulation

play56:36

work at Tech UK which is the uh digital

play56:39

Tech trade body in the UK over a

play56:41

thousand members um ranging from Big

play56:43

Tech Deep Mind Google Meta all the way

play56:46

through to cyber security providers

play56:48

SMEs um many of our members are

play56:50

harnessing the really positive impacts

play56:52

of synthetic media um but many are

play56:55

becoming increasingly concerned

play56:56

concerned with the rising malicious use

play56:58

of deep fakes so everything from Revenge

play57:01

pornography undermining digital ID

play57:04

verification um fraud which is a big one

play57:07

um in your opinion what should companies

play57:10

do now to address the rising um kind of

play57:13

problem of of deep fakes I know you

play57:15

mentioned um voluntary Charters which we

play57:18

already do with things like fraud um but

play57:20

what should we do now yeah it's a great

play57:22

question I mean I I think the first

play57:24

thing to say is that political

play57:26

parties and political campaigns

play57:28

shouldn't be allowed to use AI

play57:31

generators for their content I think we

play57:32

should just start by taking that off the

play57:34

table that's a precautionary principle

play57:36

there potentially some downsides to that

play57:39

but it feels like a safer and sensible

play57:41

thing to do right the second thing to

play57:43

say is that we shouldn't allow the big

play57:47

Tech platforms so Facebook or Twitter or

play57:50

anywhere where there's a broadcast of

play57:52

information to have digital people

play57:56

counterfeit digital people right so if

play57:58

you you know have a handle Zanny on

play58:01

Twitter for example only Zanny should be

play58:03

allowed to represent as Zanny on Twitter

play58:06

I shouldn't be able to come along create

play58:08

a perfect synthetic fake of Zanny and

play58:11

have that you know imitate her language

play58:13

now I think that's a reasonably

play58:15

straightforward sensible thing that all

play58:17

the big Tech platforms will commit to it

play58:20

doesn't address other platforms right

play58:23

outside of you know the big big provider

play58:26

and those tools and techniques are going

play58:28

to be widely available again it's a

play58:30

proliferation question it's going to be

play58:32

really difficult to say to somebody well

play58:34

you know you're using synthetic media to

play58:36

generate a new product design or a new

play58:38

fashion outfit or all these other good

play58:40

uses um you're not allowed to have it

play58:43

because there's a risk that you're going

play58:44

to be able to generate some you know

play58:46

deep fake I think we should also be like

play58:49

wide-eyed about how quickly we adjust to

play58:51

the risks you know like back in you know

play58:54

20 odd years ago people were like well

play58:56

we'll never be able to do Financial

play58:58

transactions on the internet because

play59:00

there's so much fraud right we're going

play59:01

to be inundated with fraudulent activity

play59:04

we do tens of trillions of dollars of

play59:06

transactions it's completely transformed

play59:08

our world and we have a minuscule amount

play59:09

of Fraud and it's a constant back and

play59:11

forth you know likewise with Spam

play59:13

detection right we we everyone thought

play59:15

we're going to be inundated with spam

play59:17

we're going to produce all this

play59:18

automated content increasingly the next

play59:20

threat is that um you know older people

play59:23

are being tricked by ai's that you know

play59:26

can imitate the voice of say your

play59:29

daughter or child who you know might be

play59:31

asking you for a loan or something

play59:33

there's this conman scam type thing

play59:35

which is now a little like more more

play59:37

possible and more capable of course

play59:39

that's a new Threat Vector that causes

play59:41

real harm on the flip side spreading

play59:44

knowledge and information about it

play59:45

there's a very very simple defense which

play59:47

is just to say you know never provide

play59:50

access you know to my account over the

play59:52

phone right I'll never you know call you

play59:54

out of the blue asking for that so we

play59:56

adjust we adapt and it you know it

play59:58

doesn't mean that we can eliminate all

play59:59

of the harms but it means that like net

play60:01

net we just have to be more resilient

play60:03

and more focused on

play60:05

adaptation gosh lots of questions yes

play60:07

gentlemen there for Bros back and then

play60:09

I'm going back to this

play60:14

side yeah right

play60:19

there thank you

play60:22

uh think on AI um I look at the U

play60:27

intelligence in the name of your company

play60:29

it's Intelligence Squared and that

play60:32

reminds us that it is not just a new

play60:35

type of Technology it's a new type of

play60:37

intelligence so I agree with you

play60:40

entirely I also agree with your view of

play60:42

the world of abundance absolutely superb

play60:45

I'm also an optimist but there is an

play60:47

area of

play60:49

contention is about super intelligence

play60:53

and about the existential risk

play60:56

I I must say that I've been shocked

play60:58

hearing what you were saying and I just

play61:01

challenge you on that on AGI which

play61:05

artificial general intelligence which

play61:08

many people think May um emerge within

play61:13

the next 5 years or so apart from the

play61:16

definition what it is let's make it very

play61:18

simple that it will be smarter than

play61:20

humans and if it is smarter than humans

play61:24

then of course it can outsmart us set

play61:26

its own goals and exponentially increase

play61:30

its power and be an existential threat

play61:33

to us yep fair question and I I

play61:37

certainly hear this a lot I I think that

play61:40

there's a risk of anthropomorphic

play61:42

projection like we we see a model that

play61:44

is capable of generating images or

play61:46

generating text and we assume that

play61:49

therefore it is going to emerge the

play61:51

capability to have its own goals or it's

play61:54

going to emerge the capability to be able

play61:56

to update its own code or somehow it's

play61:58

going to sort of naturally learn to

play62:01

operate autonomously and then deceive us

play62:04

and get out of the box and my belief and

play62:07

I may be wrong but my firm belief from

play62:10

all of my years of working in this field

play62:13

is that those are capabilities that we

play62:14

would choose to design into the model

play62:17

that we would be able to observe and if

play62:20

if they do if if someone does choose to

play62:23

create those models then yes those

play62:25

capabilities exist — then yes there are risks you

play62:28

know that they have that they could get

play62:30

out of the box and they could be

play62:31

uncontrollable and that's that is really

play62:33

the program of containment it's

play62:35

basically saying that it is conceivable

play62:39

that these models could be used to do

play62:41

really bad things over a couple of

play62:43

decades and that they have to be

play62:45

restricted very quickly you assume that

play62:48

we'll only have one sorry that we will

play62:51

have many agis and each of them may be

play62:55

smarter than humans and some of them

play62:57

won't be controlled by us and therefore

play63:00

the risk is there I think thank you I'm

play63:03

going to leave it at that because you

play63:04

can imagine huge number of things and

play63:06

there's a lot of actual hands gone up

play63:08

with lots of questions so yes lady here

play63:10

in the third

play63:12

row hi uh I think we said we were going

play63:15

to talk about

play63:17

redistribution I just want to know what

play63:18

you make of the kind of growing

play63:20

disparity between the individuals the

play63:23

people that provide the raw data that

play63:24

make the realization of these kinds of

play63:26

Technologies possible and those that

play63:28

obviously control these Technologies to

play63:29

whom the Lion's Share of the wealth

play63:31

flows to so sort of how are we going to

play63:34

address that kind of growing disparity

play63:36

and how are we going to kind of

play63:37

compensate people for what they give to

play63:40

these systems ultimately yeah so it's a

play63:43

good question so the way that these

play63:45

models are trained today is that they

play63:48

have scraped data that is available on

play63:51

the open web so so far anything that you

play63:54

put up on the web blog or a website is

play63:58

the the the cultural and legal consensus

play64:01

over the last 25 years has been that it

play64:03

is fair game it's open to anybody to

play64:06

read it to use it provided you don't

play64:08

regurgitate it word for word so if you

play64:11

copy an entire paragraph that is

play64:14

copyright the the counterargument that

play64:17

people in the big tech companies and

play64:19

myself included are making is that we're

play64:21

capturing the essence of these models

play64:24

we're learning the style we're learning

play64:26

the tone of text never reproducing the

play64:29

underlying content and practically

play64:32

speaking even if you wanted to I don't think it is

play64:35

possible to capture the dollar value and

play64:37

return 0.001 cents to the creator of

play64:41

a website but the sorry go ahead if

play64:45

that's if that's going to automate

play64:47

people's labor then we need to find some

play64:48

way of redistributing that you said that

play64:50

there was no you kind of commented

play64:52

saying that there was no uh displacement

play64:55

that we kind of feeling what about the

play64:56

SAG strikes or what about you know

play64:58

arguably the rmt are striking because

play65:01

they are being automated to some degree

play65:02

their jobs are being automated we are

play65:04

seeing some to some degree impact

play65:06

material impacts of this now yeah yeah

play65:08

the RMT for sure although not by AI but by

play65:11

automation in general of course but it's still I mean how

play65:12

do you define artificial intelligence in

play65:14

general so do you think that that is

play65:17

going to be a growing part of Labor

play65:22

Relations going forward is the side the

play65:24

the directors and actors strike the

play65:26

beginning of something that you're going

play65:28

to see elsewhere is this going to be a

play65:30

real fight there's clearly a fight for

play65:31

copyright already in my industry we're

play65:33

Furious that you've just sucked up all

play65:35

of our data without telling us yeah yeah

play65:37

totally the transition is going

play65:40

to be painful for sure so at the moment

play65:43

taxation in the US is on average 25% for

play65:48

labor so anyone who's working on average

play65:50

25% the tax on software is only 5% and

play65:54

tax is a tool for incentivization right

play65:57

so we should think about it as a tool

play65:59

for adding friction in the areas that we

play66:02

want to go slower and speeding up the

play66:04

things that we want to you know go

play66:06

faster it's pretty much as simple as

play66:08

that if you add a huge taxation burden

play66:11

then yeah you're going to slow down

play66:12

Innovation but it is going to keep

play66:14

people at sag in work for longer and

play66:17

those are rules that we get to make

play66:19

they're choices that we get to make and I

play66:21

think that's exactly the discussion that

play66:23

we that we should have there's a

play66:24

question uh from online which is related

play66:26

to this and it's a sort of question

play66:28

about the big picture outcome will AI

play66:30

make inequality better or

play66:35

worse so the extremes of

play66:38

inequality are going to continue so take

play66:42

the those who currently have access to

play66:44

power and resources are going to be the

play66:46

first to adopt new technologies you know

play66:49

we've already seen that like big tech

play66:51

tech companies that have vast cash

play66:53

reserves that can hire the best people

play66:55

that can acquire the most amount of data

play66:57

and compute are moving faster than ever

play67:00

before on the flip side this revolution

play67:03

is one that is also being led by the

play67:05

open source movement I mean that's kind

play67:08

of incredible I mean PE like today you

play67:11

can get an absolute Cutting Edge model

play67:15

that is say 18 months behind that costs

play67:21

less than $2,000 to train and it'll be

play67:24

as good as GPT-3 right so that was The

play67:27

Cutting Edge 18 24 months

play67:30

ago that trajectory is going to continue

play67:33

so I think that the open source movement

play67:35

is always going to be 18 to 24 months

play67:38

behind at least for the next 5 years

play67:39

until they get till the models get

play67:41

really really big and that's an amazing

play67:43

story for for inequality I mean it's a

play67:46

very meritocratic moment that whatever

play67:49

your job whatever your app that you're

play67:51

developing you'll be able to integrate

play67:53

these tools into your own workflow very

play67:55

very cheaply and easily I mean as I said

play67:57

the cost of even using one of the best

play67:59

models in the world has dropped 70x in

play68:01

the last year so I think that on the

play68:03

face of it that does good things for you

play68:06

know equality of access what you can't

play68:08

stop is that the very top people race

play68:11

away the fastest and I I don't know what

play68:13

that trajectory ends up looking like

play68:15

thank you yes gentleman

play68:17

there five rows back I'm going to try

play68:20

and get through everyone's questions

play68:22

thank you for the talk has been very

play68:24

very interesting and also great

play68:25

participation um I want to bring it a

play68:28

bit back to The Human Side we talk a lot

play68:30

about tech so how do you think about

play68:32

Solitude and having a positive

play68:34

relationship with technology and I'm

play68:36

thinking I tried Pi it's amazing what

play68:39

if it becomes better than all of my

play68:41

friends together what if it gives me all

play68:43

of the best ideas I'm like why should I

play68:45

hang out or come to this event or do

play68:47

things in real life that is a really

play68:49

really good question so for those who

play68:52

don't know I mean we've designed pi to

play68:55

be an amazing conversational

play68:58

friend so if you use chat GPT you can

play69:02

say generate me a business plan or a

play69:04

marketing strategy or a poem or a travel

play69:07

itinery Pi is much more like talking to

play69:10

a best friend or a confidant it's super

play69:13

fluent and high EQ it asks you

play69:16

clarifying questions it rephrases what

play69:18

you've said it reflects back what you

play69:20

said it's very relaxed and supportive

play69:23

it's extremely non-judgmental no matter

play69:26

how awful your racist diatribe or the

play69:29

vent that you need to get out about you

play69:31

know your horrible colleague at work is

play69:33

very patient and supportive and you know

play69:36

I think that's an amazing contribution

play69:39

to the world providing people with a

play69:41

supportive companion but we've also

play69:44

designed pi to encourage you to talk

play69:46

about your friends and to get out right

play69:48

it is explicitly trying to help you have

play69:52

a place to simulate and practice if

play69:54

you're feeling anxious and to reconnect with

play69:57

other friends and so the values that we

play70:00

bake into these models and what we

play70:02

actually mean in practice by quote

play70:03

unquote safety first that's the key

play70:06

conversation that we have to have today

play70:08

because the mental model that we've got

play70:09

to accept is that we're not going to

play70:12

struggle with hallucinations which

play70:13

everyone talks about we're not going to

play70:15

struggle with bias focus on the world

play70:18

the challenges that we have in the world

play70:20

when we have Perfection you know and

play70:22

that's exactly what I think you're

play70:23

getting at which is what a what do we do

play70:26

if it really is a place that does make

play70:29

me you know a relationship that makes me

play70:31

smarter that makes me feel calmer that

play70:34

makes me feel more kind more optimistic

play70:37

more respectful of myself the reality is

play70:39

that's the trajectory that we're on but

play70:41

if we're on that trajectory to follow up

play70:43

with a a reference to an incident that

play70:45

I'm sure many of you all know about that

play70:47

New York Times journalist who had a

play70:48

fairly weird interaction with an early

play70:51

prototype of Chat GPT which said you

play70:52

know why aren't you leaving your wife I

play70:54

know I you really love her is that a

play70:56

risk with

play70:58

pi no so so Pi Pi is currently the

play71:03

safest AI in the world today none of

play71:06

those provocations work so Pi knows that

play71:09

it is an it it knows it's an AI if you

play71:12

try to flirt with pi if you if you you

play71:16

know try to have a romantic relationship

play71:18

with pi it's extremely clear and

play71:20

resistant again it doesn't judge you uh

play71:23

or tease you it sort of politely

play71:25

pushes you off it'll keep

play71:27

you at a distance it'll keep you at a

play71:29

distance say look I'm I'm not designed

play71:30

to do that I can't do that for you these

play71:32

are boundaries I am an AI I am not designed for this

play71:34

you are trying to flirt with me and I

play71:37

can't go there boundaries are critical

play71:39

boundaries are what give us the you know

play71:41

feeling that we have control and that

play71:43

established trust between us so it's a

play71:45

very important part of Pi's design and

play71:47

it doesn't suffer that and we red team

play71:49

that a great deal okay thank you yes uh

play71:52

lady here in the fifth row yeah go ahead

play72:00

just picking up on your comment about

play72:01

politics um if it was up to you at what

play72:04

point would you allow politicians to or

play72:06

political parties to use Ai and how

play72:10

could

play72:11

AI help politics be the best of itself

play72:15

best it could

play72:16

be it's a great question I mean and I

play72:19

hope that an AI like Pi not only makes

play72:22

you more kind and respectful to yourself

play72:25

more forgiving of yourself but also in

play72:28

doing so makes you more kind and

play72:30

respectful to other people I mean I

play72:32

think we've just become so overwhelmed

play72:35

by an adversarial politics and social

play72:37

media and celebrity culture I think you

play72:40

know I hope that these AIS can help you

play72:43

to imagine and model simulate and

play72:45

practice you know those kinds of more

play72:48

respectful and pro-social behaviors I'm

play72:50

not itching to give politicians access

play72:53

to AIS as decision makers yet I think

play72:56

we're a long way from that and I think

play72:58

people often imagine that it could be

play73:00

the ultimate strategist or it could have

play73:01

the the ultimate policy insight and I

play73:04

think for now I think I'm much more

play73:06

focused on you know the emotional

play73:08

intelligence that it can all give

play73:10

us there's a question up here can we get

play73:13

the microphones up up there there's a

play73:15

gentleman there on the balcony I'm sorry

play73:17

I hadn't seen you up

play73:21

there hello um I'm Thal I'm a data

play73:24

scientist and and a government

play73:25

contractor um I've worked on simulating

play73:29

covid and developing autonomous weapons

play73:31

uh so tonight we've spoken about the the

play73:34

recent AI wave uh of large language

play73:36

models so my question is to you um

play73:40

what's what's your kind of uh take on

play73:43

the the kind of feeling within the large

play73:45

AI houses today that can we can we

play73:48

still keep the progress of AI alive

play73:52

um with just this current paradigm of

play73:55

large language models and what is the

play73:57

desire to push forward and explore new

play73:59

model classes uh new paradigms of

play74:02

AI yeah so I mean I think um I don't

play74:07

think there's any risk of progress in AI

play74:10

slowing down anytime soon I mean I think

play74:12

some people have been afraid that you

play74:14

know we're going to sort of regulate the

play74:16

progress out of the system I I

play74:19

personally think that that is extremely

play74:21

unlikely at this point I mean the the

play74:24

the challenge of the last century has been in

play74:28

inventing and creating new technologies

play74:30

and and powers and I think that the

play74:32

challenge has now flipped the challenge

play74:34

of the next you know few decades is

play74:36

going to be in containing and shaping

play74:39

those Powers so that they always work

play74:40

for us I think that large language

play74:43

models and deep learning itself large

play74:45

language models are a version of deep

play74:46

learning that that you know ecosystem of

play74:49

invention has already been opened it's

play74:52

set on its course and I I don't think

play74:55

we're lacking any fundamental algorithms

play74:57

or that you know another I mean I'm not

play75:00

sure what you're referencing about other methods

play75:02

but I'm not convinced that we need other

play75:04

methods to make progress I was I think I

play75:07

was kind of meaning it in the sense of

play75:09

uh contined progress now leading to a

play75:11

general form of intelligence do you

play75:13

think the large language model Paradigm

play75:15

is enough because it's it's from my

play75:17

point of view it's still an open

play75:18

question right I think that most of the

play75:23

capabilities that would be involved in a

play75:26

properly general intelligence like the

play75:27

AGI that were described would be

play75:29

engineering decision decisions there

play75:31

would be ways that you organize the

play75:33

current set of tools to do for example

play75:35

recursive self-improvement to do

play75:38

self-supervision self- goal definition

play75:41

um those are those are I think

play75:43

engineering capabilities which we can

play75:45

choose to do or not in the next 10 years
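To make that point about organizing the current set of tools concrete, here is a minimal, purely illustrative sketch of a critique-and-refine loop assembled from ordinary language-model calls. It is not Inflection's method or anyone's production system; the `llm` function is a hypothetical placeholder for whatever chat-model API you have available. The point is only that capabilities like self-supervision are largely a matter of how existing components are wired together, not a new algorithm.

```python
# Illustrative sketch only: a self-critique loop assembled from plain LLM calls.
# `llm` is a hypothetical placeholder; swap in a real chat-model client to run it.

def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real chat-model API call")

def self_refine(task: str, max_rounds: int = 3) -> str:
    """Draft an answer, ask the same model to critique it, then revise."""
    draft = llm(f"Task: {task}\nWrite your best answer.")
    for _ in range(max_rounds):
        critique = llm(
            f"Task: {task}\nDraft:\n{draft}\n"
            "List concrete weaknesses, or reply DONE if there are none."
        )
        if critique.strip().upper() == "DONE":
            break  # the model judges its own output acceptable
        draft = llm(
            f"Task: {task}\nDraft:\n{draft}\nCritique:\n{critique}\n"
            "Rewrite the draft so it addresses every point in the critique."
        )
    return draft
```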

play75:48

awesome thank you so much thank you the

play75:50

um lady back there on the yes s of two

play75:53

rows from the back we' probably got time

play75:55

for two or three more

play75:57

questions uh thank you uh

play75:59

congratulations MFA you've achieved many

play76:02

things and um incredible that you've

play76:04

written a book I actually wanted to ask

play76:06

about you what motiv motivated you to

play76:09

actually write this book whilst you were

play76:10

starting this company uh and also thank

play76:12

you for un ask myself that same thing

play76:15

yeah well also like exposing yourself to

play76:18

answering every single question on the

play76:19

future of humanity and progress so

play76:22

than great question go ahead why did did

play76:25

you write this book and why are you here

play76:26

tonight answering these questions I I

play76:28

couldn't help but write it I I wanted to

play76:33

be on record making a prediction about

play76:36

how I think things are going to unfold

play76:39

um in order to sort of look back in a

play76:42

decade and calibrate and see you know

play76:45

was I as zany sometimes thinks too much

play76:48

on the catastrophy side of things too

play76:50

dark about my predictions do I have my

play76:52

own like sort of am I over obsess with

play76:55

pessimism or indeed the reverse like am

play76:57

I dismissing existential threat you know

play77:00

risks too much maybe that's real and I

play77:02

think the rigor of putting something out

play77:05

that other people can critique and

play77:07

really researching I mean I really

play77:09

really did research a lot of the

play77:11

historical Trends as well was just a

play77:14

very kind of satisfying thing to do so

play77:16

from a selfish perspective it was really

play77:18

about sort of trying to be articulate

play77:21

and clear about my predictions so that I

play77:22

could validate them in the future and

play77:24

then have an excuse to spend time every

play77:26

morning you know before I start work

play77:28

writing and and reading and researching

play77:30

and trying to you know see what history

play77:33

has to teach us about this I mean the

play77:34

first three or four chapters are mostly

play77:35

about the historical basis for

play77:38

proliferation and general purpose

play77:41

Technologies over here one more question

play77:43

over here yes there the of three words

play77:45

in the

play77:50

back thank you MF I thought that was

play77:52

really interesting um you've laid out

play77:54

quite a compelling in case for the next

play77:55

5 years and the likely trends for AI but

play77:59

I'd be quite curious to know what's

play78:00

taken you by surprise in the past couple

play78:02

of years has there been any developments

play78:04

in AI which you perhaps didn't manage to

play78:07

predict or anticipate I'd like to know

play78:09

what surprised you the most

play78:13

recently

So, initially there was a fear that we would never be able to control the quality of the output. Two years ago we thought that bias was going to be the big challenge; all we were talking about was bad training data producing toxic generations, that this thing was going to constantly make things up, constantly hallucinate. And what we have empirically observed is that with each order of magnitude more investment in compute, the models get easier to control. We can create very precise and detailed behaviors, like the tone of Pi that I just described to you, and I hope people will try it. You can actually phone Pi: you can speak to Pi in fluent natural language, just as you would on a normal phone call, and it will speak back to you in one of five different voices.

And what's the choice of voices?

You'll have to guess; they're called Pi voice one, two, three, four and five. We deliberately designed them to be age neutral, gender neutral and, hopefully, race and accent neutral, so it's actually quite varied: sometimes it sounds a little Australian, sometimes a little English. It's very subtle, not annoyingly all over the place, but we tried to capture the essence of what an AI is like. We spend so much time thinking about what is human-like and what human-like capabilities are; I wanted Pi to be true to what it is to be an AI. An AI is a product of all the training data and all the people it has interacted with, so we tried not to make it too much like any one character in our world. So that was the surprising thing: the models got easier to control. It's almost like having a new clay, a new design material that you can shape into almost a personality; it's a really precise clay. That has been super exciting and very, very creative, and I'm just so excited, because we've been one of the first in the world to get access to it, given our lucky position and everything, but in the next few years many, many people are going to have access to the same tools: in just natural language, just able to give an instruction, or in low-code, no-code environments where you drag and drop and plug and play. I think that's going to be a really amazing time, to see what people do with it.
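As a rough illustration of that "new clay" idea: in most chat systems today, much of a model's persona comes from a standing behavior prompt that is sent along with the conversation on every turn. The sketch below is not Pi's implementation; the `chat` function and the persona text are hypothetical stand-ins, included only to show how thin the behavior-shaping layer can be once the underlying model is controllable enough to follow it.

```python
# Illustrative sketch only: shaping a persona with a standing behavior prompt.
# `chat` is a hypothetical placeholder for any chat-model API
# (system prompt plus message history in, reply out).

from typing import Dict, List

PERSONA = (
    "You are a kind, patient companion. Be supportive and non-judgmental, "
    "ask gentle follow-up questions, and never lecture the user."
)

def chat(system: str, messages: List[Dict[str, str]]) -> str:
    raise NotImplementedError("stand-in for a real chat-model API call")

def converse() -> None:
    """Tiny console loop: the persona is re-sent with the full history each turn."""
    history: List[Dict[str, str]] = []
    while True:
        user = input("you> ")
        if user.strip().lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user})
        reply = chat(PERSONA, history)
        history.append({"role": "assistant", "content": reply})
        print("ai>", reply)
```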

play80:50

it I think we have time for one more

play80:52

question from the floor and I'm sorry

play80:53

that there are lots that we we haven't

play80:54

got to right let's go to the back

play80:58

there I know it's hard to get to

play81:08

but hi um thank you so I just want to um

play81:13

ask a question with regards to the

play81:15

governments and also who's going to

play81:16

benefit most and who's going to be left

play81:18

behind uh when we think about emerging

play81:20

markets and developed economies you

play81:23

mentioned that for example p can respond

play81:25

in five different tones but in terms of

play81:28

languages or parts of the world where

play81:30

where English is not that the proficient

play81:33

how how will the impact be felt there

play81:37

it's a great question I mean Pi already

play81:40

speaks about 25 languages um it's really

play81:45

good in the major languages Spanish

play81:48

French German and so on it's much less

play81:50

good in Japanese Mandarin Arabic Etc

play81:54

certainly not good in the long tale of

play81:55

languages um you know and I think it's

play81:58

kind of remarkable that these models

play82:00

have arrived you know with so many

play82:03

capabilities in terms of their languages

play82:04

simultaneously but I think it's going to

play82:06

take us a few years before you know we

play82:08

add the full Suite of languages and it's

play82:10

not just languages but it's actually the

play82:12

training data that reflects the cultures

play82:16

of you know people outside of the

play82:18

western world I mean English has

play82:20

dominated you know over the last like

play82:23

few centuries and most of our culture is

play82:25

documented in English and that is

play82:28

obviously a subset of all culture so

play82:29

there's a clear representation question

play82:32

there which you know I think is going to

play82:33

be challenging when it comes to you know

play82:35

smaller communities so I want to

play82:37

conclude by seeing what impact you've

play82:39

had on the crow I should have asked this

play82:40

at the beginning but let's at the end

play82:43

put up your hand if you think that

play82:44

having heard all of this the net impact

play82:47

of AI for Humanity is going to be

play82:51

positive okay and just as a check those

play82:53

of you who think it'll

play82:56

negative well it's not overwhelming but

play82:58

the positives clearly win out Mustafa uh

play83:01

I think after you've read mustafa's book

play83:04

you will have a I think a very clear and

play83:07

sober sense both of the potential

play83:09

benefits but also of the risks you're

play83:10

not a wild-eyed you know panglossian

play83:14

about this it's very serious it's a

play83:15

really excellent book I recommend it

play83:18

Mustafa thank you for joining us thank

play83:19

you so much thank

play83:22

you thanks a lot well done

play83:27

[Applause]
