Mustafa Suleyman on The Coming Wave of AI, with Zanny Minton Beddoes
Summary
TL;DR: In a thought-provoking discussion, Mustafa Suleyman, co-founder of DeepMind and Inflection AI, explores the transformative impact of generative AI models like ChatGPT on society. He delves into AI's potential to revolutionize various sectors, the ethical considerations of data training, and the importance of governance in steering AI's trajectory. Suleyman also addresses the existential risks of AI, advocating for a balanced view that acknowledges its benefits and challenges, while emphasizing the need for proactive and informed regulation to ensure AI's positive impact on humanity.
Takeaways
- 📚 Mustafa Suleyman, co-founder of DeepMind and Inflection AI, discusses the transformative potential of AI in his book 'The Coming Wave', emphasizing its ability to change various aspects of life and society.
- 🤖 AI's generative capabilities are leading a revolution where models can produce new content, such as images, text, and music, which is a significant leap from the previous focus on classification tasks.
- 🚀 The advancement in AI is unprecedented, with computational power for cutting-edge models growing by 10x each year, suggesting a thousandfold increase in AI capability within the next few years, potentially leading to AI that can plan and execute complex tasks.
- 🌐 There is a global impact and interest in AI, with many concerned about the downsides, such as existential risks, while others, like Suleyman, offer a compelling view of the positive potential of AI.
- 🧩 The development of personal AI, like Inflection AI's 'Pi', aims to provide individuals with their own AI assistant, capable of organization, planning, and support, almost like a 'chief of staff'.
- 🏛️ Suleyman calls for robust governance and oversight of AI, including the presence of technical experts in government and the willingness to experiment with regulation to ensure safety and address risks.
- 🌍 The geopolitical landscape and tensions between major powers like the US and China pose challenges to achieving global governance structures for AI, but Suleyman stresses the importance of not demonizing other nations or engaging in a race to the bottom on values.
- 🛠️ The hardware component of AI, particularly GPUs, is a critical area that requires attention due to the monopoly and concentration of chip manufacturing, which could affect the development and proliferation of AI.
- 💡 Suleyman highlights the importance of creativity and innovation in AI, stating that models are not just regurgitating information but are capable of novel predictions and interpolations between concepts.
- 🔮 While there is much debate about the singularity and existential risks of AI, Suleyman is skeptical of the singularity framing and believes existential risks are very low, focusing instead on the practical near-term capabilities and governance of AI.
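The compute trajectory in the takeaway above is simple compounding; as a back-of-envelope sketch (the 10x-per-year and "three or four orders of magnitude" figures are Suleyman's from the talk, everything else is plain arithmetic):

```python
# Compounding the compute-growth figures quoted in the talk.
growth_per_year = 10  # cutting-edge training compute grows ~10x per year

# Ten consecutive 10x steps over the past decade:
past_decade_factor = growth_per_year ** 10  # a 10-billion-fold increase

# "Three or four orders of magnitude" over the next 5 years,
# i.e. roughly 1,000x to 10,000x today's training compute:
next_5y_low = 10 ** 3
next_5y_high = 10 ** 4

print(f"past decade: {past_decade_factor:,}x")
print(f"next 5 years: {next_5y_low:,}x to {next_5y_high:,}x")
```

By the same logic, each model generation Suleyman mentions later (GPT-3 to GPT-3.5 to GPT-4) sits roughly one 10x compute step apart.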
Q & A
What is the main topic of discussion in the transcript?
-The main topic of discussion is the impact of artificial intelligence, particularly generative AI models like ChatGPT, on the world and the future implications as presented in Mustafa's book 'The Coming Wave'.
Who is Mustafa and what are his credentials in the AI field?
-Mustafa is the co-founder of DeepMind and Inflection AI. He has considerable credibility in the AI field, having established two successful AI companies and contributed to the development of AI technology.
What is the potential of AI in transforming various sectors according to Mustafa?
-According to Mustafa, AI has the potential to bring about massive efficiencies and innovation in various sectors such as agriculture, healthcare, education, and transportation, leading to an era of radical abundance.
What are some of the risks and downsides associated with AI that are discussed in the transcript?
-Some of the risks and downsides include the potential for AI to be used for harmful purposes, the impact on jobs due to automation, the spread of misinformation through deep fakes, and the challenges to liberal democracy and governance.
What is the 'lump of labor fallacy' mentioned in the transcript?
-The 'lump of labor fallacy' is the belief that there is a fixed amount of work to be done, and with automation, there will be fewer jobs available for humans. Mustafa argues that history has shown that new jobs and roles are created as old ones become automated.
What is Mustafa's view on the role of government in managing the impact of AI?
-Mustafa believes that governments should have technical and engineering expertise, take risks with regulation, and be involved in the creation of technology to deeply understand and control it effectively.
What is the significance of the 'voluntary commitments' mentioned in the transcript?
-The voluntary commitments are a set of guidelines that AI companies have agreed to follow, which include exposing their models to independent scrutiny and sharing weaknesses publicly. These commitments are a precursor to future regulation and are meant to ensure safety and ethical use of AI.
How does Mustafa address the concern about AI and existential risks?
-Mustafa considers the risk of AI leading to existential catastrophe to be very low. He emphasizes that the focus should be on the practical near-term capabilities of AI and the consequences for society and nation-states.
What is the 'Pi' AI that Mustafa mentions in the transcript?
-Pi is a personal intelligence AI developed by Mustafa's company, Inflection AI. It is designed to be a conversational partner with high emotional intelligence, providing support and assistance to users in a personalized manner.
What is the potential impact of AI on climate change and environmental issues?
-AI has the potential to help address climate change by optimizing industrial systems for greater efficiency, aiding in the development of more resilient crops, and contributing to the invention of new solutions to environmental problems.
Outlines
📚 Introduction to AI and the Impact of Chat GPT
The speaker opens the discussion by highlighting the significance of AI, particularly generative AI models like ChatGPT, which has garnered widespread attention since its release in November last year. The audience's familiarity with ChatGPT is assessed through a show of hands, indicating its prevalence. The speaker introduces Mustafa, a co-founder of DeepMind and Inflection AI, who has written a compelling book on the future of AI. Mustafa's background in philosophy and theology at Oxford is mentioned, along with his transition to the tech industry and his contributions to the field of AI.
🤖 Mustafa's Journey from Philosophy to AI Entrepreneur
Mustafa shares his personal journey, starting with his studies in philosophy and theology at Oxford, dropping out to pursue a greater impact in the world, and his work with a charity to establish a telephone counseling service. He discusses his shift from non-profit work to the realization of the potential of technology, inspired by the rapid growth of Facebook. Mustafa's quest to learn about technology led him to various business ventures, including an unsuccessful attempt at providing Wi-Fi infrastructure for restaurants. His eventual partnership with Demis Hassabis, co-founder of DeepMind, is highlighted, along with their ambitious goal to create AI capable of replicating or even surpassing human intelligence.
🚀 DeepMind's Early Days and the Bet on Deep Learning
Mustafa reflects on the early days of DeepMind, set in 2010, when the founders had a vision of creating an AI that could match human intelligence. The speaker emphasizes the significant bet they placed on deep learning, a technology that was not yet widely adopted. Mustafa mentions that some of the key figures in the AI industry, including the 'Godfather of AI' Geoffrey Hinton, were involved with DeepMind in its early stages. The speaker also notes the importance of timing and being ahead of the curve in the AI revolution.
🌐 The Generative Revolution and Future Predictions
The speaker discusses the transition from the classification revolution to the generative revolution in AI, where models are now capable of producing new content rather than just classifying existing data. Mustafa predicts that in the next 5 years, AI will reach human-level capability across various tasks, leading to significant changes in innovation and management efficiency. He describes a future where AI can plan across multiple time horizons, from generating new product ideas to researching, manufacturing, and marketing them.
🌱 The Positive Impact of AI on Society and the Environment
Mustafa outlines the potential positive impacts of AI, such as its ability to contribute to solving climate change, improving healthcare, and increasing efficiency in various sectors. He emphasizes that intelligence has been the driving force behind human creation and innovation, and AI is an extension of that, capable of discovering new knowledge and inventing solutions to problems. Mustafa envisions a future of radical abundance, where AI serves as a scientific advisor, research assistant, tutor, coach, and confidant for everyone.
🔒 The Risks and Containment of AI Technologies
The speaker shifts the focus to the potential risks of AI, including the possibility of AI models providing information on harmful activities, such as manufacturing biological and chemical weapons. Mustafa discusses the challenges of controlling AI models, especially open-source ones, and the potential consequences of powerful AI falling into the wrong hands. He also addresses the relationship between large tech companies and nation-states in overseeing AI development and ensuring accountability.
🏛️ The Future of Work and the Role of Government in AI Oversight
Mustafa and the speaker debate the future of jobs in the context of AI, with Mustafa arguing against the idea of mass unemployment due to AI. He discusses the historical trend of job creation following automation and the potential for AI to free people from the obligation to work, leading to a focus on well-being and prosperity. The conversation then turns to the role of government in AI oversight, with Mustafa advocating for governments to build technology, employ technical experts, and take risks with regulation to ensure the safe and beneficial development of AI.
🌐 Geopolitical Tensions and the Global Governance of AI
The discussion moves to the geopolitical implications of AI, with the speaker raising concerns about the tensions between the US and China and the race for global dominance. Mustafa emphasizes the importance of not demonizing China and focusing on the actions they are taking, which are driven by self-preservation instincts. He also stresses the need for good governance and oversight, mentioning the EU AI Act as an example of robust regulation and the importance of not engaging in a race to the bottom on values.
🛠️ The Importance of Hardware in AI Development
The speaker and Mustafa discuss the importance of hardware in AI, particularly GPUs, and the current monopoly on chip manufacturing. Mustafa explains the narrow supply chain and the potential implications for regulation and access to critical chips. He also touches on the potential for open-source hardware to contribute to the development of AI, while acknowledging the challenges and limitations associated with it.
🌍 Global Participation in AI Development
Mustafa addresses the global nature of AI development, noting the significant contributions from Chinese scientists and the importance of including them in the conversation. He refutes the stereotype of Chinese scientists as mere copiers, highlighting their creativity and desire to build their own businesses and services using AI technologies.
💡 AI as a Tool for Creativity and Innovation
The speaker explores the potential of AI to contribute to creativity and innovation, questioning whether AI could develop ideas like the concept of Apple. Mustafa explains that AI models are capable of interpolation, combining existing ideas to create novel predictions. He believes AI will aid human creativity for the foreseeable future, rather than operating independently.
🏦 The Role of Regulation and Self-Governance in AI
Mustafa discusses the importance of independent external technical expertise in governing AI and the challenges of finding competent regulators. He mentions the voluntary commitments made by AI companies, including transparency and sharing of weaknesses, as a precursor to future regulation. The speaker raises concerns about conflicts of interest, to which Mustafa acknowledges the inherent conflicts in for-profit companies but emphasizes the steps taken to address them.
🌐 The Global Impact of AI on Inequality and Culture
The conversation concludes with a discussion on the impact of AI on inequality and culture, particularly the representation of non-English speaking communities. Mustafa acknowledges the current limitations of AI in non-major languages and the importance of reflecting diverse cultures in training data. He also addresses the potential for AI to exacerbate existing inequalities, while also providing tools for broader access and participation.
Keywords
💡Artificial Intelligence (AI)
💡Generative AI
💡DeepMind
💡Inflection AI
💡Personal Intelligence (Pi)
💡Existential Risk
💡Computational Power
💡Open Source AI
💡Regulation
💡Redistribution
💡Meritocracy
Highlights
ChatGPT's release in November last year marked a global realization of generative AI's potential to change the world.
Mustafa Suleyman, co-founder of DeepMind and Inflection AI, offers a compelling look at AI's future in his book, 'The Coming Wave'.
Suleyman's background in philosophy and theology at Oxford led to a unique approach to founding tech companies.
DeepMind's early focus on deep learning positioned it ahead of the AI curve, attracting notable figures in the field.
Suleyman predicts AI will reach human-level capability across various tasks within the next 3 to 5 years, leading to significant societal shifts.
AI's generative revolution is enabling models to produce novel content, unlike the classification tasks of the past decade.
The exponential growth in computational power dedicated to AI models is unprecedented in technology history.
Suleyman envisions a future where personal AI assistants function as chiefs of staff, prioritizing and supporting individuals.
AI has the potential to solve complex issues like climate change and improve healthcare through increased efficiency and invention.
Suleyman addresses the risks of AI proliferation, including the empowerment of harmful actors and the challenges of containment.
The debate on AI's impact on jobs reflects a historical pattern, but the future may see a shift towards less work and more leisure.
Suleyman argues for a focus on redistribution of wealth and a reevaluation of work's role in society as AI advances.
The rise of AI may challenge liberal democracy, with concerns about deep fakes and misinformation in the political sphere.
Suleyman calls for robust governance and oversight to ensure AI technologies are developed and used responsibly.
The potential environmental impact of AI's energy consumption is mitigated by the move towards renewable energy in data centers.
Suleyman discusses the importance of education and the role of AI in democratizing access to high-quality learning resources.
The future of AI governance includes voluntary commitments from tech companies and potential regulation to ensure safety.
Transcripts
hello everybody it's great to see my
gosh so many friends actually and indeed
my husband which is a bit alarming he
never turns up to any event I do uh but
um it is great to be here uh to talk
with about literally one of the hottest
topics of the moment with someone who
has written one of the best books about
it um how many of you have used chat GPT
just a show of
hands virtually everybody I think you'd
probably then agree that chat GPT came out in
November last year and it was only then
that most people realized that
artificial intelligence generative AI
models in particular were about to
change the world and suddenly there was
a kind of collective Global oh my God
this capability is extraordinary and
it's been reflected in endless numbers
of editorials hand-wringing politicians
and I think I'm right in saying the main
focus has been on the downsides everyone
has their pet view of what the odds are
of existential risk are we all going to
kill ourselves it's all terrible and
Mustafa comes into this as a man with
considerable credibility he is a man who
has co-founded not just one but two
successful AI companies uh and he's a
man who in this book takes a
sober realistic and actually very
compelling look at what lies ahead of us
and so that's why you really should read
it it's great I've read it twice uh you
should read it Mustafa just to give you
some he doesn't need much introduction I
don't think I think to this group but he
was a co-founder of Deep Mind back in
2010 uh he then was a co-founder of
inflection AI with Reid Hoffman Reid
Hoffman his co-founder has with the help
of chat GPT written an
extremely upbeat view of the potential
of this technology so I'd love to know
the debates between the two of you um he
was awarded a CBE a few years ago for his
Visionary services and influence in the
UK technology sector um he is also on
the board of The Economist so I get to
see Mustafa working up close um uh he's
a friend of The Economist friend of uh
and and great figure in British
technology but I think and the place to
start with this book and the book is
called the coming wave and you will know
that there has been if you've turned on
your TV or listen to a podcast recently
you will know that never mind the coming
wave there is already a wave of um
publicity and people being impressed
with this book I believe you've had you
told me 60 appearances of various sorts
so consider yourselves lucky or 61 on
this list uh but understandably the work
the book has had a tremendous impact
because it is very interesting very
thoughtful and it's on the hottest topic
of the moment so we want to talk most of
the time about the book but I do want to
for those of you who don't know Mustafa
to get a little bit of background and
the first is that Mustafa is actually
not a computer geek you didn't study
computer code right you studied
philosophy and theology at Oxford so can
you just give us the kind of potted
history about how a man who studied
philosophy and theology comes to be the
co-founder of two tech companies what
are you
doing well I've always found philosophy
a systems thinking tool it enables me
to be rigorous and clear about what I
think and you know right from the very
outset I think when I was 19 I actually
dropped out of My Philosophy degree no I
didn't know that yeah I didn't finish
and I was really motivated by the impact
that I could have in the world I left to
help start a charity um at the time it
was a telephone Counseling Service um
called Muslim youth helpline and it was
a secular I was an atheist even though I
had grown up uh with a Muslim background
uh it was a secular service that was
designed to provide faith and culturally
sensitive um support to Young British
Muslims this was in
2003 and you know I I found myself at
Oxford studying this very theoretical
esoteric you know set of ideas and I
wanted to put real things into practice
in terms of my ethics and that was why I
went to you know start the helpline
and worked on that as a volunteer for 3
years uh I soon got you know frustrated
about the scale of impact um in our
nonprofit uh and I worked briefly for
the mayor of London at the time Ken
Livingstone um as a human rights policy
officer um and you know that was that
was inspiring but I was also struggling
with the scale of impact I I I realized
that you know if if I didn't capture
what what really makes us organized and
effective as a species The Profit
incentive then I was going to miss one
of the most important things to happen
in my lifetime and um at the time I saw
the rise of Facebook this was sort of
around 2007 2008 and it had grown in
the space of two years to 100 million
monthly active users and I was totally
blown away at how quickly this was
growing out of seemingly nowhere
something completely new to me and so I
set about on a quest to find anyone and
everyone that would speak to me to teach
me about technology I had started a
bunch of businesses before that two
different businesses one actually a
technology company selling electronic
point of sale systems actually around
here in Notting Hill uh in restaurants
uh trying to put Wi-Fi infrastructure in
there and so on that was a that was
unsuccessful that was ahead of its time
um and so I was looking for people who I
could you know form a new partnership
with and figure out how to take
advantage of of of Technology uh and
that's where I met my friend and
co-founder of DeepMind Demis Hassabis
because he was the brother of my best
friend at the time from school um and he
was just finishing his PhD uh in
neuroscience at UCL and we got together
and you know the rest is history and at
that time you know back in 2010 you had
between you and there was another
co-founder right Shane Legg the three of
you had the ambition that you were going
to create an artificial intelligence
that was you know capable of replicating
human intelligence or even succeeding it
so just just think this was 13 years ago
the rest of us didn't even know this
stuff was really going on you're you're
in your where is it in Russell Square
somewhere did you imagine that by 2023
the world would have what we have now I
mean in a way yes it was difficult for
us to imagine exactly how it would
unfold but we made a very big bet on
deep learning uh which is one of the
primary tools that is powering this new
Revolution um before anybody was
involved in deep learning so the the
current chief scientist and co-founder
of OpenAI the creators of chat GPT was
one of our interns uh back in 2011
Geoffrey Hinton who was the who
subsequently became the um one of the
heads of AI at Google and is known now
as the Godfather of AI recently in the
Press worried about the consequences he
was our first advisor our paid advisor I
think his salary was £25,000 a year to
us so I think three of the six
co-founders of open AI at some point
passed through deep mind either to give
talks or were actually members of the
team so really is incredibly about
timing you know we got the timing
absolutely right we were way ahead of
the curve at that moment and somehow we
managed to hang on so you you were there
for a while and then let's fast forward
a bit um you can read the rest of this
in the book you you now have co-founded
and run inflection Ai and you are
creating an AI called Pi which you can
interact with if you'd like tell us what
pi does so Pi stands for personal
intelligence and I believe that over the
next few years everybody is going to
have their own personal AI there are
going to be hundreds of thousands of AIS
in the world they'll represent
businesses they'll represent Brands
every government will have its own AI
every nonprofit every musician artist
record label everything that is now
represented by a website or an app is
soon going to be represented by an
interactive conversational intelligence
service that represents the brand
values and the ideas of whatever
organization is out there and we believe
that at the same time everybody will
want their own personal AI one that is
on your side in your corner helping you
to be more organized helping you to make
sense of the world um it really is going
to function as almost like a chief of
staff all you know prioritizing planning
teaching
supporting supporting you so that sounds
great um what does it actually mean
though in practice because so often this
conversation about AI it's at this point
then it turns into the apocalyptic we're
going to end up you know wiping
ourselves out because there'll be some
Rogue person you know sitting in a
garage somewhere who will you know
unleash a virus that will kill us all so
before we get to all of that stuff in
let's say say I don't know 5 years I
you've said within the next 3 to 5 years
you think AI will reach human level
capability across a variety of tasks
perhaps not everything but a variety so
paint a picture for us of what life will
be like in five years in 2028 I first of
all will it be you and me here or will
there be the kind of Mustafa Ai and the
bot okay let me let me just go back 10
years just to to give you a sense for
what has already happened and why the
predictions that I'll make I think are
plausible so the Deep learning
Revolution enabled us to make sense of
raw messy data so we could use AIS to
interpret the content of images classify
whether an image contains dogs or cats
what those pixels actually mean we can
use it to understand speech so when you
dictate into your phone and it
transcribes it and Records perfect text
we can use it to do language translation
all of these are classification tasks
we're essentially teaching the models to
understand the messy complicated world
of raw input data well enough to
understand the objects inside that data
that was the classification Revolution
the first 10 years now we're in the
generative Revolution right so these
models are now producing new images that
you've never seen before they're
producing new text that you've never
seen before they can generate pieces of
music and that's because it's the flip
side of that coin the first stage is
understanding and classifying if you
like the second stage having done that
well enough you can then ask the AI to
say given that you understand you know
what a dog looks like now generate me a
dog with your idea of pink with your
idea of yellow spots or whatever and
that is an interpolation it's a
prediction of the space between two or
three or four
Concepts and that's what's produced this
generative AI revolution in all of the
modalities as we apply more computation
to this process so we're basically
stacking much much larger AI models and
we're stacking much much larger data the
accuracy and the quality of these
generative AIS gets much much better so
just to give you a sense of the
trajectory we're on with respect of
computation over the last 10 years every
single year the amount of compute that
we have used for The Cutting Edge AI
models has grown by 10x so 10x 10x 10x
10x 10 times in a row now that is
unprecedented in technology history
nowhere else have we seen a trajectory
anything like that over the next 5 years
We'll add probably three or four orders
of magnitude basically another thousand
times the compute that you see used
today to produce GPT-4 or the chat model
that you might interact with and it's
really important to understand that that
might be a technical detail or something
but it's important to grab like sort of
grasp that because when people talk
about GPT-3 or GPT-3.5 or GPT-4 the
distance between those models is in fact
10 times compute it's not incremental
it's exponential and so the difference
between GPT-4 and GPT-2 is in fact a 100
times worth of compute the largest
compute infrastructures in the world
basically to learn all the relationships
between all the inputs of all of this
raw data so what does that mean what
does that entail enable them to do in
the next phase we'll go from being able
to perfectly generate so speech will be
perfect video generation will be perfect
image generation will be perfect
language generation will be perfect to
now being able to plan across multiple
time Horizons so at the moment you could
only say to a model give me you know a
poem in the style of X give me a new
image that matches these two Styles it's
a sort of oneshot prediction next you'll
be able to say generate me a new product
right in order to do that you would need
to have the AI go off and do research to
you know look at the market and see what
was potentially going to sell what are
people talking about at the moment it
would then need to generate a new image
of what that product might look like
compared compared to other images so
that it was different and unique it
would then need to go and contact a
manufacturer and say Here's the
blueprint this is what I want you to
make it might negotiate with that
manufacturer to get the best possible
price and then go and Market it and sell
it those are the capabilities that are
going to arrive you know approximately
in the next 5 years it won't be able to
do each of those automatically
independently there will be no autonomy
in that system but certainly those
individual tasks are likely to
so that means that presumably the
process of innovation becomes much much
more efficient the process of managing
things becomes much more efficient what
does that mean and let's let's stick
with the upside for the moment I will I
promise you we'll get to all the
downsides of which there are many but
but what is that going to enable us to
do I mean people talk about AI will help
us solve climate change AI will lead to
tremendous you know improvements in
healthcare just talk us through what
some of those things might be so we can
see the upside
intelligence has been the engine of
creation everything that you see around
you here is the product of us
interacting with some environment to
make a more efficient a more a cheaper
table for example or a new iPad if you
look back at history you know today
we're able to create we're able to
produce a kilo of grain with just 2% of
the labor that was required to produce
that same one kilo of grain 100 years
ago so the trajectory of Technologies
and scientific invention in general
means that things are getting cheaper
and easier to make and that means huge
productivity gains right the insights
the intelligence that goes into all of
the improvements in agriculture which
give us more with less are the same
tools that we're now inventing with
respect to intelligence so for example
to stay on the theme of Agriculture it
should mean that we're able to produce
new crops that are drought resistant
that are pest resistant that are in
general more resilient we should be able
to to tackle for example climate change
and we've seen many applications of AI
where we're optimizing existing
Industrial Systems we're taking the same
big cooling infrastructure for example
and we're making it much more efficient
again we're doing more with less so in
every area from healthcare to education
to Transportation we're very likely over
the next two to three decades to see
massive efficiencies invention think of
it as the interpolation I described with
respect to the images the the the AI is
guessing the space between the dog the
pink color and the yellow spots it's
imagining something it's never seen
before and that's exactly what we want
from AI we want to discover new
knowledge we want it to invent new types
of science new solutions to problems and
I think that's really what we're likely
to get we I believe that if we can get
that right we're headed towards an era
of radical abundance imagine every great
scientist every entrepreneur you know
every person having the best possible
Aid you know Scientific Advisor research
assistant chief of staff tutor coach
Confidant each of those roles that are
today the you know exclusive Preserve of
the wealthy and the educated and those
of us who live in peaceful civilized
societies those roles those capabilities
that intelligence is going to be widely
available to everybody in the world just
as today no matter whether you are a you
know a millionaire or you earn a regular
salary we all get exactly the same
access to the best smartphone and the
best laptop that's an incredibly
meritocratic story which we kind of have
to internalize you know the Best
Hardware in the world no matter how rich
you are is available to at least the top
two billion people
And that, I think, is going to be the story that we see with respect to intelligence.

All right, enough upbeat stuff. We've had 20 minutes of upbeat, which is more than you've had in most of the interviews you've done. But you didn't call your book The Coming Nirvana; you called it The Coming Wave, and I'm told the original title was going to be Containment Is Not Possible. I'm glad you didn't call it that; it wouldn't have sold so well. But explain the argument you're making, which is not actually that nirvana is around the corner; in fact it's a much, much more subtle argument than that. So tell us what the downsides are, and what the focus on containment in the book is about.

Yeah, I mean, I think I'm pretty wide-eyed and honest about the potential risks. If you take the trajectory that I predicted, that more powerful models are going to get smaller, cheaper and easier to use, that is the history of every technology, of basically every piece of value that we've created in the world: if it's useful, then it tends to get cheaper, and therefore it spreads far and wide. In general, so far that has delivered immense benefits to everybody in the world, and it's something to be celebrated; proliferation so far has been a really, really good thing. But the flip side is that if these are really powerful tools, they could ultimately empower a vast array of bad actors to destabilize our world. Everybody has an agenda, a set of political beliefs, religious beliefs, cultural ideas, and they're now going to have an easier time of advocating for it.

At the extreme end of the spectrum, there are certain aspects of these models which provide really good coaching on how to manufacture biological and chemical weapons. It's one of the capabilities that all of us developing large language models over the last year have observed: they've been trained on all of the data on the internet, and much of that information contains potentially harmful things. That's a relatively easy thing to control and take out of the model, at least when you're using a model that is manufactured by one of the big companies; they want to abide by the law, they don't want to cause harm, so we basically exclude that material from the training data and we prevent those capabilities. The challenge we have is that everybody wants to get access to these models, and so they're widely available in open source: you can actually download the code to run, albeit smaller, versions of Pi or ChatGPT for no cost.
And if that trajectory continues, over 10 years you get much, much more powerful models that are much smaller and more transferable, and people who want to use them to cause harm then have an easier time of it.

I think that's a really important distinction: there are the leading companies, Google DeepMind, OpenAI, who have the biggest models now. There's a relatively small number of these, and they are bigger and more powerful, but not far behind are a whole bunch of open-source ones. And so the question then, for your containment, is: can you prevent the open-source ones, which will potentially be available to the angry teenager in his garage, or her garage? Can those ones be controlled or not?

OK, the darker side of my prediction is that these are fundamentally ideas. They're intellectual property; it's knowledge and know-how. An algorithm is something that can largely be expressed on three sheets of paper and actually is readily understandable to most people; it's a little bit abstract, but you can wrap your head around it. The implementation mechanism requires access to vast amounts of compute today, but if in time you remove that constraint, and you can actually run it on a phone, which you ultimately will be able to do in a decade, then that's where the containment challenge comes into view. And I think there are also risks on the centralized side of the question, right? This is clearly going to confer power on those who are building these models and running them, my own company included, Google and the other big tech providers. So we don't eliminate risk simply by addressing the open-source community; we also have to figure out what the relationship is between these super-powerful tech companies, which have lots of resources, and the nation state itself, which is ultimately responsible
for holding us accountable.

So let's go through some of the most frequently cited risks, or indeed negative consequences. The one that you hear a lot is: as AIs become equivalent to, or exceed, human intelligence across a wide range of tasks, there won't be any jobs for any of us. Why would you employ a human if you could have an AI? History suggests that that's bunkum; we've never yet run out of jobs, and being a good paid-up economist I think it's a lump-of-labor fallacy. But lots and lots of people say this. What's going to happen to the jobs? Where are you on that?

Well, let's just describe the lump-of-labor fallacy, because I think it's important to sit with that, because that is the historical trend so far. What it basically means is: when we automate things and make them more efficient, we create more time for people to invent new things, and we create more health and wealth, and that in itself creates more demand, and then we end up creating new goods and services to satisfy that demand. And so we'll continually just keep creating new jobs and roles. You can see that in the last couple of decades: there are many, many roles that couldn't even have been conceived of 30 years ago, from app designer all the way through to the present-day prompt engineer of a large language model. So that's one trajectory that is likely.

I think the question about what happens with jobs depends on your time horizon. Over the next two decades, I think it's highly unlikely that we will see structural disemployment, where people want to contribute their labor to the market and they just can't compete. I think that's pretty unlikely; there's certainly no evidence of it in the statistics today.
Beyond that, I do think it's possible that many people won't be able, even with an AI, to produce things that are of sufficient value that the market wants them and their AI jointly in the system. I mean, AIs are increasingly more accurate than humans, they are more reliable, they can work 24/7, they're more stable. So I think that's definitely a risk, and I think that we should lean into that and be honest with ourselves that it is actually maybe an interesting and important destination. Work isn't the goal of society. Sometimes I think we've just forgotten that society and life and civilization are about well-being and peace and prosperity; it's about creating more efficient ways to keep us productive and healthy. Many people, probably in this room and including us, enjoy our work, we love our work, and we're lucky enough and privileged enough to have the opportunity to do exactly the work that we want. I think it's super important to remember that many, many people don't have that luxury, and many people do jobs that they would never do if they didn't have to work. So to me the goal of society is a quest for radical abundance: how can we create more with radically less, and liberate people from the obligation to work? And that means we have to figure out the question of redistribution. Obviously that is an incredibly hard one, and obviously I address it in the book, but that's the thing we have to focus on. What does taxation look like in this new regime? How do we capture the value that is created and make sure it's actually converted into dollars, rather than just a sort of value-add to GDP?

So we're going to get on to redistribution and the role of government in just a second, but first, to remind you, and I should have said this at the beginning: Mustafa and I are going to talk for perhaps another 15 or 20 minutes, and then we're going to open it up to questions. For those of you who are watching on the live stream, feel free to start asking them now, because this little AI that I have here is telling me that calls and notifications will be silenced, which is not very helpful. Now I've got an answer; I do see the questions there, so please start writing in the questions and we will get to them in about 15 minutes.

But OK, the role of government. You will, in this world, need more radical redistribution, but one of the concerns is that AI, and the rise of AI, actually makes the functioning of democracy ever harder. We're already seeing lots of concerns about deepfakes wrecking the 2024 elections; four billion people live in countries that will have elections next year, and people are worrying about 2024, never mind 2028 or 2034. Mustafa and I just had a conversation with Yuval Harari, who is as pessimistic as you are thoughtfully optimistic, and who basically said it was the end of democracy. I'm not sure that either you or I agreed, but what is the consequence for liberal democracy in the coming decades, in this world of AI?
Look, I think the first thing to say is that the state we're in is pretty bleak. Trust in governments, in politicians and in the political process is as low as it has ever been; in fact, 35% of people interviewed in a Pew study in the US think that army rule would be a good thing. So we're already in a very fragile and anxious state. And, to sort of empathize with Yuval for a moment, the argument would be that these new technologies allow us to produce new forms of synthetic media that are persuasive and manipulative, that are highly personalized, and that exacerbate underlying fears. So I think that is a real risk; we have to accept that it's going to be much easier and cheaper to produce fake news. We have an insatiable, addictive, dopamine-hitting appetite for untruth: it sells quicker, it spreads faster, and that's a foundational question that we have to address. I'm not sure that it's a new risk that AI imposes; it's something that AI and other technologies accelerate. And that's a good lens for understanding the impact that AI has in general: it is going to amplify the very best of us, and it's also going to amplify the very worst of us.

And what about the fact that this is developing in a world which geopolitically is split in a way that it hasn't been in the post-Cold War world at all? We have the tensions between the US and China; we have essentially a sort of race for global dominance between these two regimes. In that kind of a world, how can you achieve the sort of governance structures that you write about in your book, which are needed to try and perhaps prevent the most extreme downsides of AI?

Yeah, much as I've been accused of being an optimist about it, I've also been accused of being a utopian about the interventions that we have to make, and I think that unfortunately that's just a statement of fact: what's required is good, functioning governance and oversight. The companies are open and willing to expose themselves to audit and to oversight, and I think that is a unique moment relative to past generations of tech CEOs and inventors and creators. Across the board we're being very clear that the precautionary principle is probably needed, and that's a moment when we have to go a little bit slower, be a little bit more careful, and maybe leave some of the benefits on the tree for a moment before we pick that fruit, in order to avoid harms. I think that's a pretty novel setup as it is, but it requires really good governance, it requires functioning democracies, it requires good oversight. I think we do actually have that in Europe: the EU AI Act, which has been in draft now for three and a half years, is super thorough, very robust and pretty sensible, and so in general I've been a fan of it and have kind of endorsed it.

But people often say, well, if we get it right in the UK, or if we get it right in Europe and the US, what about China? I hear this question over and over again: what about China? And I think that's a really dangerous line of reasoning. First, it sort of demonizes China, as though China has this maniacal, suicidal mission to, at any cost, take over the world and be the next dominant global power. So far I don't see any evidence of that. I'm not ruling it out, and I'm not a sympathizer, but I think we should just be wide-eyed about the actions they're actually taking at the moment. They have a self-preservation instinct just as we do, and the more that we can appeal to that desire to have their citizens benefit from economic interdependence and from peace and prosperity and well-being, we're both
aligned in those incentives. I think the second thing is that it's dangerous to point the finger at China, because actually we can't just have a race to the bottom on values; we have to decide what we stand behind. I'm a believer that we shouldn't have a large-scale state surveillance apparatus enabled by AI. We shouldn't do that just because China is doing it; we shouldn't get into an arms race and take risks just because they're taking those risks. That's difficult for some people to accept, because they might be hyper-pragmatic, but I think that only leads to an inevitable self-fulfilling prophecy, where we both end up taking terrible risks which are unnecessary.

So what should government do? Let's be concrete: what should this government do? We're in the UK, and presumably most people here are from London. The British government wants to be an AI superpower, and is having a conference on AI safety in November; there's a big focus here. What should this government, or indeed other governments, be doing concretely to minimize the risks? Is there stuff that should be banned now? Are there rules of the road that should be put in place?

So the first thing is that governments have to build technology. We've got into this habit of outsourcing and commissioning third parties to create technology, and I think it's really difficult to control what you don't understand; unless you build it, you don't deeply understand it. So I think that's the first thing, which in itself is very controversial: when I propose that in government, people sort of throw up their hands, and there's a lack of will, a lack of self-confidence, a lack of belief that government can be a creator, a maker, especially on the technology front.

I think the second thing is that we have to have deeply technical and engineering people, as well as technologists more generally, in cabinet positions and at the heads of every government department. It's pretty crazy to me that we don't have a CTO, a chief technology officer, in cabinet running our big institutions; all of that is outsourced. The challenge is that to do that, you have to pay close to private-sector salaries, again another highly sensitive topic that no one wants to talk about: the idea that no one should ever earn more than the Prime Minister. To me this makes no sense. How can we have an open labor market where on the one hand we're saying to people, go work for whoever you like, and people are being paid 10x, and on the other we're saying, well, take this huge sacrifice in the name of public service? The practical reality is that if that happens over many decades, the net effect is that you have quality of one type over here and another type over there, and that's really what we're facing; we have to confront that reality. It's very difficult for people to accept that we should be paying super-large salaries; it creates other issues around how we hold those kinds of people accountable, given how much of the public purse they might be earning,
etc. But fundamentally, those two things enable a third thing, which is that governments have to take risks with regulation. There is a fear that governments will act too aggressively or too experimentally and upset the big companies, and as someone who's on the receiving end of this quite a lot, and has been in the past, where mistakes have been made, I still think the right thing to do is to give governments a break. Let them make mistakes, let them make investments that don't work, praise the experimental government structures, have faith in the political process, participate, encourage it. Because otherwise there's just this spiral of decline, this lack of confidence that we can actually do the right thing, that we should do the right thing, and that ultimately leads to the self-fulfilling prophecy, much like with China.

And do you think that your view is the exception in your industry? I mean, the stereotype is a bunch of 30-year-old tech bros who think the government is useless and who are going to kind of change the world with AI. Is that an accurate stereotype? Are you the exception? Should we worry about the hubris of people in your industry?

I think we have polarization everywhere, so the stereotype is probably true. But the counter, that we can do it without technology, I think is totally wrong: technology is an absolutely necessary but not sufficient part of the process. And I think that some people in Silicon Valley do have a tendency to be much more techno-libertarian, there's no question about that: the government is the problem, the objective is to eradicate the state and run things completely independently. And I'll be honest, there are some very, very influential, very powerful people who have that objective and are building towards it with both their companies and their fortunes. I'm very skeptical of them, and obviously I'm on the other side of that. And that's what shapes a lot of the public fear about this: that you have a bunch of hyper-powerful people who are shaping this with a kind of disdain for the state and
the democratic process.

Two quick questions from me, which I know someone would ask otherwise, and then we're going to audience questions. The first one is the whole question of the singularity; we can't have a conversation about AI without the singularity. Will it happen? When will it happen?

I honestly think it's a very unhelpful framing of what's to come, and people jump to this framing because it's easy to point to Terminator and Skynet. But it's almost like leaping to the Moon before we've even invented the transistor; it's hundreds of years away, and it's really unhelpful. There are many practical, near-term, operational capabilities that you can predict, just as I've tried to describe, and you can then use those to wrestle with what the consequences are for the nation state, how this changes our businesses, what this means for our governments. So in general I don't make those predictions; I'm very skeptical that the superintelligence framing is useful to us.

What about the other one that wannabe AI commentators are always talking about, which is the odds of existential catastrophe? What are the odds that we will wipe ourselves out with this?

Again, I think very, very low.

What's very low?

I think infinitesimally small, such that it's not worth putting a number on.

The reason I asked you that is because I asked someone somewhat similar to you what this was. "Oh, very low," they said, and I said, "What's very low?" "Oh, about 5%." So you think it's infinitesimal, near zero? OK, well, that's a good place to end on.

All right, we're going to open now to your questions, and questions from the online audience. Oh, this is a good question from Kitty Haddock, who asks: what will be the impact of all that computer power on our carbon emissions, or will AI be able to enhance productivity so we reduce carbon elsewhere?

Yeah, another hot take on this: very low and really inconsequential. The amount of carbon that we spend on our data centers is genuinely minuscule, relatively speaking.
Secondly, most of that happens in completely renewable data centers: Google and Microsoft are both entirely 100% renewable, and Google actually owns the largest set of wind farms in the world. One of the projects that I worked on whilst I was at DeepMind was making the entire wind-farm fleet 20% more efficient, so right from the outset they have been focused on this. I'm not saying there aren't other environmental consequences, like the use of gallium and cobalt in the actual chip manufacture and so on, but I honestly think that relative to the benefits we're seeing, and with respect to the absolute cost of carbon per unit of computation, it's very, very small.

And just to follow up on that, because an argument I have often heard is that the cost of electricity and the access to power will be a constraint on the development of these AIs and their proliferation. Do you also think that's not true?

No, I think that's not true. I think that some data centers will be at the 100-megawatt scale, which is maybe a single-digit percentage of a small city's electricity consumption, but we're talking about a very small number at the 100-megawatt scale; that really is enormous, and nothing like that exists today. So don't worry about the carbon consequences of the actual AIs.

Now, questions from the audience here. Yes, the lady here in the second row. Do you get a microphone? Does it work that way? Yes, it's on its way down. Right here, the lady in the second row there.

Thank you. Sheru, from an education company that uses AI. My question to you is: if you think about two industries, say healthcare and education, and you think about the applications that AI has, could you choose between the two which you would hold the most hope for? And how should they be thinking about it? Should they be thinking about procuring it, and how do you procure it safely and well? Or again, as you said, you could produce it, but some of those organizations may not be in a position to produce it anytime soon. So if you're a procurer, how do you do that well, and what are some of the frameworks that should be used for that?

Thank you, that's a great question. I'm probably most excited, in terms of the immediate near-term impact, about education. These models are already being used: I think the primary use case of ChatGPT is in fact homework help, and people often think, oh, my kids are copying and pasting. But if you actually watch the way they're using these models, and many people use our model, Pi, for exactly this reason, it's a conversational interaction, much like an enthusiastic teacher might speak to a child about the interest that they have. So the child, or the learner in general, gets to phrase the question in exactly their style, picking on exactly the thing that they're interested in, asking the odd obscure, poorly phrased, not-complete-picture type of question. And of course the AI is infinitely patient and provides really detailed, mostly factual information; it's not always perfect, but it will be. I think that's an unbelievable meritocratic gain for everybody. I think we need to picture a world in five years' time where the best education in the world, completely personalized and entirely factually accurate, is available to absolutely everybody who wants it on
free which sounds amazing um how do you
go from where we are now to that uh to
that that world I I think the beauty of
the um of these models is that they have
an inherent tendency to proliferate and
get smaller I mean that this is the
upside of proliferation they spread
because everybody wants access everybody
wants to integrate them you know there
are so many competing models now the
cost of um the the the the cost of
buying model per word so if you're
building an app for example you'll go to
one of the three or four big model
creators and you pay per word that cost
has come down
70x since
January because we're all competing with
each other right so that means that you
can now take a regular app that you
might have been developing for you know
years in its current instantiation and
add a conversational widget in fact
we're doing this at The Economist with
the ecobot secret project underway
clearly not so secret
anymore thank
you
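To make the economics concrete, here is a back-of-the-envelope sketch of that 70x decline. The 70x factor comes from Suleyman's remark; the January price per word and the app's usage volumes are invented for illustration.

```python
# Hypothetical illustration of per-word model pricing.
# Only the 70x decline is from the talk; all other numbers are made up.

january_price_per_word = 0.0014   # assumed $/word in January
decline_factor = 70               # "that cost has come down 70x since January"
price_now = january_price_per_word / decline_factor

# An imagined app serving conversational replies to its users.
words_per_reply = 150
replies_per_day = 10_000
words_per_day = words_per_reply * replies_per_day

daily_cost_january = words_per_day * january_price_per_word
daily_cost_now = words_per_day * price_now

print(f"Daily cost in January: ${daily_cost_january:,.2f}")
print(f"Daily cost now:        ${daily_cost_now:,.2f}")
```

With these assumed numbers, a feature that would have cost $2,100 a day in January costs $30 a day now, which is the difference between an experiment you shelve and a widget you ship.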
Sorry. And you integrate the conversational element into your existing workflow, so you should be able to ask any question, in the style and the theme of your brand, about the specific content that you have. It will be like a plug-and-play widget that you can put anywhere in the app, and that's what I mean about proliferation: obviously everybody finds that useful, and you'll be able to use that tool in a low-code or no-code environment. Look at how the image-generation models are being integrated into Adobe today: if you're already a user of Adobe, you're using the absolute cutting-edge AI models in a drag-and-drop way, no training required. If you're building a new website today, it's drag and drop: you just grab a little widget and plop it over here, and suddenly you have a YouTube player with your video, and suddenly you have a conversational interaction with a language model that is conditioned over all your data. So I think it's important to wrap your head around the idea that this is going to be widely available to everybody. There isn't going to be an access issue; the risk and harm come from the bad actors who might use it for nefarious purposes, and the challenge is mitigating those downsides. But the upsides are incredible.

Let's have a go; lots and lots of hands. Yes, the lady there, but I'm going to take one from online while you get your microphone. The one from online is a question from São Paulo; gosh, your audience is coming from a long way away. Rene Dealo Jr asks: when you say "we will solve this and that", who is this "we"? Humanity? A good question. Corporations, the UN or Elon Musk?

I definitely hope it's not Elon
Musk. I think of it as the community of researchers, inventors and creators. There's this sort of dialogue: sometimes you see snippets of it on Twitter, sometimes you see it in the research papers that academics publish, and you see it in the blogs and the products that big companies produce. There is this sort of unfolding, evolving ecosystem which is referencing itself, creating and evolving. So when I say "we", I certainly don't mean me at Inflection, my current company; I just mean the ecosystem of humanity. We're trending collectively in a direction of invention and creation.

Just one tiny follow-up: does that ecosystem include Chinese scientists?

So, ten years ago Chinese scientists were not really part of the conversation; they weren't really very relevant. Over the last ten years they have launched onto the scene, producing very high-quality, creative research. The old stereotype was that they can only copy and steal; again, I think that's a demonization, partly by Elon Musk actually, who was a big proponent of this idea that they were just robbing our intellectual property. There was some of that, but largely they were just as creative as us, and they wanted to get access to these tools to build their own businesses and provide new products and services for their own citizens, for the same reasons as we do. So if you start from that assumption, then of course they're participating in this ecosystem, of course they're creating incredible models. They have their own constraints with respect to censorship, and that has slowed them down a little bit, but they're actually not going to be that far behind. There are some issues with the export controls, and they don't have access to the cutting edge, but I don't think that's going to hold them back for very long.

Interesting. Yes, go ahead.

Thanks, this is excellent. My question is about AI, ideas and the people needed to think of them. If you take someone like Steve Jobs, for instance, you had a very specific person, with very specific interests and skills and talent, able to develop not only the technology but the brand and a point of view on the world that came with it. Do you think AI would be capable of coming up with, let's say, the version of the Apple idea? Will it be in the future, or will it simply be a machination of past information?

So, I think people have often characterized these AIs as regurgitating their training data, or reproducing whatever they have seen previously, and I think that's a kind of misunderstanding of what they do. They're almost always doing interpolation, the thing I described earlier: predicting the space between two ideas. They're saying, let me mash together these two concepts, just like the dog and the yellow spots, or take your pick of any combination, and that's creativity. Fundamentally, when I invent something, I'm really being inspired by a huge range of different experiences and ideas, and I'm using those to produce a novel prediction or generation at any given moment, and I'm testing it out and seeing if it's useful, or if it makes sense, or if it catches on, and then it has a life of its own and is sort of independent of me. So I think for the next couple of decades these AIs are going to aid the human in that process of creation and invention and discovery. They're not going to wander off and have their own agency and do their own thing; it's just not possible, the capabilities just aren't there, and won't be there in the near term. So I think it's going to be the human-AI combo, for a good time to come, that does the
creation exactly it's more of the
assistant exactly the brilliant
assistant um right let's go further back
yes gentlemen there for row
back let's get
you hi um Are you seriously trying to
suggest that the um no
that the AI companies are able to
self-regulate and didn't the banks prove
that that is an impossible concept but
the banks are highly highly regulated
and so not just by themselves but look
I'm absolutely not proposing
self-regulation I mean if if that came
across then I apologize I'm wrong I mean
in the in the book I really don't say
that I go to Great length to say that
independent external technical expertise
is required to do governance properly I
think the Practical challenge as you
know zany pushed back on me earlier
today when we were talking with youval
is where are these competent Regulators
who get the technical aspects where is
this Democratic process that gives us
confidence that we can appoint people to
to conduct that kind of oversight so I
think there's there's some pessimism
that they're capable of doing that that
should not mean that we sit around and
do nothing in the process um you know
for example we I visited President Biden
6 weeks ago now at the White House with
the other six AI companies Microsoft
Meta Google DeepMind etc etc and we
signed up to voluntary commitments that
were that are precursor to regulation
which the White House designed because
they realized they can't pass new
primary regulation anytime soon but the
voluntary commitments are very material
we basically have said publicly we
expose our models to expert independent
scrutiny to Red Team or stress test find
weaknesses in our own models once we
identify those weaknesses we share them
with each other and we share them
publicly so in you know transparency in
you know the open light of day and we
know that that framework the voluntary
commitments are a precursor to an
executive order which is coming from the
president sometime in the next few
months they're also the basis for
Prime Minister Rishi Sunak's AI Summit in
November in Bletchley Park where
you know many world leaders and all the
big tech companies are coming and those
voluntary commitments are going to form
the basis of the discussions for what
becomes binding not just in the UK but
hopefully worldwide so I'm totally with
you that we're not going for a
self-regulatory approach but you don't
you don't think there's a conflict of
interests well I I mean I I there's
definitely a conflict of interest of
course there's a conflict of interest I
mean we are a for-profit
company in fact we're a public benefit
corporation so I think it's kind of an
important clarification um it's a new
type of entity closer to a B Corp um which is a
hybrid for-profit nonprofit mission it
means that our directors have a legal
obligation to factor in the impact of
our activities on The Wider World both
the environment and people materially
affected by what we do who aren't just
our customers and that doesn't solve all
the issues with for-profit businesses
and the conflict that you described but
it's a first step in the right direction
and I I believe that that's how change
happens taking small steps in the right
direction
let's take a question from over there
yes gentleman quite near the back with
the white T-shirt y right
there hello
yeah hello yeah okay my question to you
as an electronics engineer is should we
now focus on the hardware part of it
considering there's a monopoly going on
and the concentration of chips to a
certain country the hardware part of it
is raising a very big question we saw it
in COVID uh things are really bad when
hardware supply goes down so is this a
great time to focus on hardware
considering we are good with the software
part for now that that's a great
question I mean we didn't really talk
about that too much here but you know
just just for everyone's benefit these
AI models are trained on GPUs graphics
processing units so chips that were
previously used for gaming for
representing Graphics in computers and
we take each one of these chips and we
daisy chain them together thousands and
thousands of times we have a computer at
inflection which is the size of four
football pitches and has 25,000 of these
chips Daisy chained together an enormous
cluster it cost about a billion and a
half dollars now all of these chips are
manufactured by one company NVIDIA who
I'm sure people have heard of and have seen
their share price go up by
350% since January their chips are
manufactured entirely in one Factory
called
TSMC the Taiwan Semiconductor Manufacturing
Company which is obviously in Taiwan
the key component of their chips of
their fabrication facility are
manufactured by one company called ASML
a Dutch company so the supply chain is I
mean we can talk about how this happened
over 30 years but extremely narrow there
really are no competing providers that
are material at any of those three
stages as a result the good news is that
that means that there are choke points
that can be used by Regulators to
monitor who has access to the critical
chips that enable the training of the
models and of course restrict access to
certain people so I think I Loosely
alluded to the export controls a minute
ago which is a new piece of legislation
or a rule that the US Administration
imposed on China last year which
prevents China anyone in China any
manufacturer in China from getting
access to the latest version of these
chips which means that they won't be
able to train a GPT-5 level model a
number of people have referred to this
as a declaration of economic war on
China and so you know I think that we
have to be very cognizant of that
denying them access to that is likely to
deliver a significant counterattack on
you know the West we are hugely
dependent on their supply chain in many
many respects so yeah chips are
absolutely at the heart of this both in
good and bad ways so if you're focused
on a chip company it's a big bet it
takes a long time to mature but it has
the potential to be the critical
component here just a followup question
yes uh do you think that open-source
Hardware will help in creating a better
setup right now considering very few
companies are focusing on creating the
hardware and all of them are completely
non uh completely for profit so
something like Open Source Hardware
focusing more helping will it help us
create better computers creating better
models with less power yeah so I I think
open-source hardware is a serious
effort and just to clarify I mean open-source
elements of hardware design are
used in many many areas so OpenRAN for
example is a hardware design for 5G
masts which ensures they're
interoperable it means that the software
that runs your telephone networks actually
can run on any type of Hardware because
the interface is standardized which is a
great thing for competition there isn't
a lock in between Hardware the builder
of the masts and the software the people
who run the operating system that sits
on top of that the downside of it is
that it has tended to be a bit more
flaky than the fully integrated side of
things so I think you should be
wide-eyed about it it isn't going to be
the Panacea to solve all of our problems
anytime soon let's take another question
there yes lady in the fourth
row thanks so much hi thank you both um
I'm javah Rari I lead digital regulation
work at Tech UK which is the uh digital
Tech trade body in the UK over a
thousand members um ranging from big
tech Google DeepMind Meta all the way
through to cybersecurity providers and
SMEs um many of our members are
harnessing the really positive impacts
of synthetic media um but many are
becoming increasingly
concerned with the rising malicious use
of deep fakes so everything from Revenge
pornography undermining digital ID
verification um fraud which is a big one
um in your opinion what should companies
do now to address the rising um kind of
problem of of deep fakes I know you
mentioned um voluntary Charters which we
already do with things like fraud um but
what should we do now yeah it's a great
question I mean I I think the first
thing to say is that political
parties and political campaigns
shouldn't be allowed to use AI
generators for their content I think we
should just start by taking that off the
table that's a precautionary principle
there are potentially some downsides to that
but it feels like a safer and sensible
thing to do right the second thing to
say is that we shouldn't allow the big
Tech platforms so Facebook or Twitter or
anywhere where there's a broadcast of
information to have digital people
counterfeit digital people right so if
you you know have a handle Zanny on
Twitter for example only Zanny should be
allowed to represent as Zanny on Twitter
I shouldn't be able to come along create
a perfect synthetic fake of Zanny and
have that you know imitate her language
now I think that's a reasonably
straightforward sensible thing that all
the big Tech platforms will commit to it
doesn't address other platforms right
outside of you know the big big provider
and those tools and techniques are going
to be widely available again it's a
proliferation question it's going to be
really difficult to say to somebody well
you know you're using synthetic media to
generate a new product design or a new
fashion outfit or all these other good
uses um you're not allowed to have it
because there's a risk that you're going
to be able to generate some you know
deep fake I think we should also be like
wide-eyed about how quickly we adjust to
the risks you know like back in you know
20 odd years ago people were like well
we'll never be able to do Financial
transactions on the internet because
there's so much fraud right we're going
to be inundated with fraudulent activity
we do tens of trillions of dollars of
transactions it's completely transformed
our world and we have a minuscule amount
of Fraud and it's a constant back and
forth you know likewise with Spam
detection right we we everyone thought
we're going to be inundated with spam
we're going to produce all this
automated content increasingly the next
threat is that um you know older people
are being tricked by AIs that you know
can imitate the voice of say your
daughter or child who you know might be
asking you for a loan or something
there's this conman scam type thing
which is now a little more
possible and more capable of course
that's a new Threat Vector that causes
real harm on the flip side spreading
knowledge and information about it
there's a very very simple defense which
is just to say you know never provide
access you know to my account over the
phone right I'll never you know call you
out of the blue asking for that so we
adjust we adapt and it you know it
doesn't mean that we can eliminate all
of the harms but it means that like net
net we just have to be more resilient
and more focused on
adaptation gosh lots of questions yes
gentleman there four rows back and then
I'm going back to this
side yeah right
there thank you
uh thank you on AI um I look at the
intelligence in the name of your company
it's Intelligence Squared and that
reminds us that it is not just a new
type of technology it's a new type of
intelligence so I agree with you
entirely I also agree with your view of
the world of abundance absolutely superb
I'm also an optimist but there is an
area of
contention it is about superintelligence
and about the existential risk
I I must say that I've been shocked
hearing what you were saying and I just
challenge you on that on AGI which is
artificial general intelligence which
many people think May um emerge within
the next 5 years or so apart from the
definition what it is let's make it very
simple that it will be smarter than
humans and if it is smarter than humans
then of course it can outsmart us set
its own goals and exponentially increase
its power and be an existential threat
to us yep fair question and I I
certainly hear this a lot I I think that
there's a risk of anthropomorphic
projection like we we see a model that
is capable of generating images or
generating text and we assume that
therefore it is going to emerge the
capability to have its own goals or it's
going to emerge the capability to be able
to update its own code or somehow it's
going to sort of naturally learn to
operate autonomously and then deceive us
and get out of the box and my belief and
I may be wrong but my firm belief from
all of my years of working in this field
is that those are capabilities that we
would choose to design into the model
that we would be able to observe and if
if someone does choose to
create those models
then yes there are risks you
know that they have that they could get
out of the box and they could be
uncontrollable and that's that is really
the program of containment it's
basically saying that it is conceivable
that these models could be used to do
really bad things over a couple of
decades and that they have to be
restricted very quickly you assume that
we'll only have one sorry that we will
have many AGIs and each of them may be
smarter than humans and some of them
won't be controlled by us and therefore
the risk is there I think thank you I'm
going to leave it at that because you
can imagine a huge number of things and
there's a lot of actual hands gone up
with lots of questions so yes lady here
in the third
row hi uh I think we said we were going
to talk about
redistribution I just want to know what
you make of the kind of growing
disparity between the individuals the
people that provide the raw data that
make the realization of these kinds of
Technologies possible and those that
obviously control these Technologies to
whom the lion's share of the wealth
flows so sort of how are we going to
address that kind of growing disparity
and how are we going to kind of
compensate people for what they give to
these systems ultimately yeah so it's a
good question so the way that these
models are trained today is that they
have scraped data that is available on
the open web so so far anything that you
put up on the web a blog or a website
the cultural and legal consensus
over the last 25 years has been that it
is fair game it's open to anybody to
read it to use it provided you don't
regurgitate it word for word so if you
copy an entire paragraph that is a
copyright violation the counterargument that
people in the big tech companies and
myself included are making is that we're
capturing the essence of these models
we're learning the style we're learning
the tone of text never reproducing the
underlying content and practically
speaking I don't think it is
possible to capture the dollar value and
return 0.001 cents to the creator of
a website but the sorry go ahead if
that's if that's going to automate
people's labor then we need to find some
way of redistributing that you said that
there was no you kind of commented
saying that there was no uh displacement
that we're kind of feeling what about the
SAG strikes or what about you know
arguably the RMT are striking because
they are being automated to some degree
their jobs are being automated we are
seeing some to some degree impact
material impacts of this now yeah yeah
the RMT for sure although not by AI of
course but it's still I mean how
do you define artificial intelligence in
general so do you think that that is
going to be a growing part of Labor
Relations going forward is the side the
the directors and actors strike the
beginning of something that you're going
to see elsewhere is this going to be a
real fight there's clearly a fight for
copyright already in my industry we're
Furious that you've just sucked up all
of our data without telling us yeah yeah
totally the the the transition is going
to be painful for sure so at the moment
taxation in the US is on average 25% for
labor so anyone who's working on average
25% the tax on software is only 5% and
taxes a tool for incentivization right
so we should think about it as a tool
for adding friction in the areas that we
want to go slower and speeding up the
things that we want to you know go
faster it's pretty much as simple as
that if you add a huge taxation burden
then yeah you're going to slow down
Innovation but it is going to keep
people at SAG in work for longer and
those are rules that we get to make
those are choices that we get to make and I
think that's exactly the discussion that
we that we should have there's a
question uh from online which is related
to this and it's a sort of question
about the big picture outcome will AI
make inequality better or
worse so the extremes of
inequality are going to continue so take
those who currently have access to
power and resources are going to be the
first to adopt new technologies you know
we've already seen that like big
tech companies that have vast cash
reserves that can hire the best people
that can acquire the most amount of data
and compute are moving faster than ever
before on the flip side this revolution
is one that is also being led by the
open source movement I mean that's kind
of incredible I mean like today you
can get an absolutely cutting-edge model
that is say 18 months behind that costs
less than $2,000 to train and it'll be
as good as GPT-3 right so that was the
cutting edge 18 to 24 months
ago that trajectory is going to continue
so I think that the open source movement
is always going to be 18 to 24 months
behind at least for the next 5 years
until the models get
really really big and that's an amazing
story for for inequality I mean it's a
very meritocratic moment that whatever
your job whatever your app that you're
developing you'll be able to integrate
these tools into your own workflow very
very cheaply and easily I mean as I said
the cost of even using one of the best
models in the world has dropped 70x in
the last year so I think that on the
face of it that does good things for you
know equality of access what you can't
stop is that the very top people race
away the fastest and I I don't know what
that trajectory ends up looking like
thank you yes gentleman
there five rows back I'm going to try
and get through everyone's questions
thank you for the talk has been very
very interesting and also great
participation um I want to bring it a
bit back to The Human Side we talk a lot
about tech so how do you think about
Solitude and having a positive
relationship with technology and I'm
thinking I tried Pi it's amazing what
if it becomes better than all of my
friends together what if it gives me all
of the best ideas I'm like why should I
hang out or come to this event or do
things in real life that is a really
really good question so for those who
don't know I mean we've designed Pi to
be an amazing conversational
friend so if you use chat GPT you can
say generate me a business plan or a
marketing strategy or a poem or a travel
itinerary Pi is much more like talking to
a best friend or a confidant it's super
fluent and high EQ it asks you
clarifying questions it rephrases what
you've said it reflects back what you
said it's very relaxed and supportive
it's extremely non-judgmental no matter
how awful your racist diatribe or the
vent that you need to get out about you
know your horrible colleague at work it is
very patient and supportive and you know
I think that's an amazing contribution
to the world providing people with a
supportive companion but we've also
designed Pi to encourage you to talk
about your friends and to get out right
it is explicitly trying to help you have
a place to simulate and practice if
you're feeling anxious and reconnect with
other friends and so the values that we
bake into these models and what we
actually mean in practice by quote
unquote safety first that's the key
conversation that we have to have today
because the mental model that we've got
to accept is that we're not going to
struggle with hallucinations which
everyone talks about we're not going to
struggle with bias focus on the
challenges that we'll have in the world
when we have perfection you know and
that's exactly what I think you're
getting at which is what a what do we do
if it really is a place that does make
me you know a relationship that makes me
smarter that makes me feel calmer that
makes me feel more kind more optimistic
more respectful of myself the reality is
that's the trajectory that we're on but
if we're on that trajectory to follow up
with a a reference to an incident that
I'm sure many of you all know about that
New York Times journalist who had a
fairly weird interaction with an early
prototype of ChatGPT which said you
know why aren't you leaving your wife you
know you don't really love her is that a
risk with
Pi no so Pi is currently the
safest AI in the world today none of
those provocations work Pi knows that
it is an AI if you
try to flirt with Pi if you you
know try to have a romantic relationship
with Pi it's extremely clear and
resistant again it doesn't judge you uh
or tease you it sort of pushes you off
it'll keep you at a
distance and say look I'm not designed
to do that I can't do that for you these
are boundaries I am an AI you are
trying to flirt with me and I
can't go there boundaries are critical
boundaries are what give us the you know
feeling that we have control and that
established trust between us so it's a
very important part of Pi's design and
it doesn't suffer that and we red-team
that a great deal okay thank you yes uh
lady here in the fifth row yeah go ahead
just picking up on your comment about
politics um if it was up to you at what
point would you allow politicians to or
political parties to use Ai and how
could
AI help politics be the best of itself
best it could
be it's a great question I mean and I
hope that an AI like Pi not only makes
you more kind and respectful to yourself
more forgiving of yourself but also in
doing so makes you more kind and
respectful to other people I mean I
think we've just become so overwhelmed
by an adversarial politics and social
media and celebrity culture I think you
know I hope that these AIS can help you
to imagine and model simulate and
practice you know those kinds of more
respectful and pro-social behaviors I'm
not itching to give politicians access
to AIS as decision makers yet I think
we're a long way from that and I think
people often imagine that it could be
the ultimate strategist or it could have
the the ultimate policy insight and I
think for now I think I'm much more
focused on you know the emotional
intelligence that it can give all of
us there's a question up here can we get
the microphones up up there there's a
gentleman there on the balcony I'm sorry
I hadn't seen you up
there hello um I'm Thal I'm a data
scientist and and a government
contractor um I've worked on simulating
covid and developing autonomous weapons
uh so tonight we've spoken about the the
recent AI wave uh of large language
models so my question is to you um
what's what's your kind of uh take on
the the kind of feeling within the large
AI houses today that can we can we
still keep the progress of AI alive
um with just this current paradigm of
large language models and what is the
desire to push forward and explore new
model classes uh new paradigms of
AI yeah so I mean I think um I don't
think there's any risk of progress in AI
slowing down anytime soon I mean I think
some people have been afraid that you
know we're going to sort of regulate the
progress out of the system I I
personally think that that is extremely
unlikely at this point I mean the
challenge of the last century has been in
inventing and creating new technologies
and powers and I think that the
challenge has now flipped the challenge
of the next you know few decades is
going to be in containing and shaping
those Powers so that they always work
for us I think that large language
models and deep learning itself large
language models are a version of deep
learning that you know ecosystem of
invention has already been opened it's
set on its course and I I don't think
we're lacking any fundamental algorithms
or another I mean I'm not
sure what you're referencing about other methods
but I'm not convinced that we need other
methods to make progress I was I think I
was kind of meaning it in the sense of
uh continued progress now leading to a
general form of intelligence do you
think the large language model Paradigm
is enough because it's it's from my
point of view it's still an open
question right I think that most of the
capabilities that would be involved in a
properly general intelligence like the
AGI that was described would be
engineering decisions there
would be ways that you organize the
current set of tools to do for example
recursive self-improvement to do
self-supervision self-goal definition
um those are those are I think
engineering capabilities which we can
choose to do or not in the next 10 years
awesome thank you so much thank you the
um lady back there yes sort of two
rows from the back we've probably got time
for two or three more
questions uh thank you uh
congratulations Mustafa you've achieved many
things and um incredible that you've
written a book I actually wanted to ask
about you what motivated you to
actually write this book whilst you were
starting this company uh and also thank
you for I ask myself that same thing
yeah well also like exposing yourself to
answering every single question on the
future of humanity and progress so
then great question go ahead why did
you write this book and why are you here
tonight answering these questions I I
couldn't help but write it I I wanted to
be on record making a prediction about
how I think things are going to unfold
um in order to sort of look back in a
decade and calibrate and see you know
was I as Zanny sometimes thinks too much
on the catastrophe side of things too
dark about my predictions do I have my
own like sort of am I overly obsessed with
pessimism or indeed the reverse like am
I dismissing existential threat you know
risks too much maybe that's real and I
think the rigor of putting something out
that other people can critique and
really researching I mean I really
really did research a lot of the
historical Trends as well was just a
very kind of satisfying thing to do so
from a selfish perspective it was really
about sort of trying to be articulate
and clear about my predictions so that I
could validate them in the future and
then have an excuse to spend time every
morning you know before I start work
writing and and reading and researching
and trying to you know see what history
has to teach us about this I mean the
first three or four chapters are mostly
about the historical basis for
proliferation and general purpose
Technologies over here one more question
over here yes there sort of three rows
from the
back thank you Mustafa I thought that was
really interesting um you've laid out
quite a compelling case for the next
5 years and the likely trends for AI but
I'd be quite curious to know what's
taken you by surprise in the past couple
of years has there been any developments
in AI which you perhaps didn't manage to
predict or anticipate I'd like to know
what surprised you the most
recently
so initially there was a fear that we
would never be able to um control
the the quality of that output so two
years ago we thought that bias was going
to be the big challenge all we
were talking about was bad training
data producing toxic generations this
thing is going to constantly make things
up it's going to constantly hallucinate
um and what we have found empirically
observed is that with each order of
magnitude more investment in
compute um the models get easier to
control we can create very precise and
detailed behaviors you know the the tone
of Pi that I just described to you and I
hope people would try it I mean you
can actually phone Pi you can speak to
Pi in fluent natural language just as
you would on a normal phone call and it
will speak back to you in one of five
different voices and what what's the
choice of
voices you'll have to guess they're called
Pi V1 2 3 4 and 5 we
deliberately designed it to
be age neutral gender neutral
and hopefully race and accent neutral so
it actually is quite varied sometimes it
sounds a little Australian sometimes a
little English it's very subtle it's not
like annoyingly all over the place but
we try to capture the essence of what is
AI like you know we spend so much time
thinking about like what is humanlike
what are humanlike capabilities well I
wanted Pi to be true to what it is
to be an AI and
an AI is a product of all the training
data and all the people that you know it
has interacted with so we try to not
make it too you know much like one you
know character in our world um so that
was the surprising thing is
that the models got easier to control
and it's almost like having a new clay
you know it's this new design material
that you can you know shape into almost
a personality you know it's a really
precise um yeah Clay and I think that's
been super exciting and very very
creative and I'm I'm just so excited
because in the next few years you know
we've been one of the first in the world
to get access to it because we're lucky
and you know our position everything but
in the next few years many many people
are going to have access to the same
tools in just natural language we're just
able to give an instruction or low-code
no-code environments where you drag and
drop and plug and play and I think
that's going to be a really really
amazing time to see what people do with
it I think we have time for one more
question from the floor and I'm sorry
that there are lots that we we haven't
got to right let's go to the back
there I know it's hard to get to
but hi um thank you so I just want to um
ask a question with regards to the
governments and also who's going to
benefit most and who's going to be left
behind uh when we think about emerging
markets and developed economies you
mentioned that for example Pi can respond
in five different tones but in terms of
languages or parts of the world
where English is not that prevalent
how will the impact be felt there
it's a great question I mean Pi already
speaks about 25 languages um it's really
good in the major languages Spanish
French German and so on it's much less
good in Japanese Mandarin Arabic Etc
certainly not good in the long tail of
languages um you know and I think it's
kind of remarkable that these models
have arrived you know with so many
capabilities in terms of their languages
simultaneously but I think it's going to
take us a few years before you know we
add the full Suite of languages and it's
not just languages but it's actually the
training data that reflects the cultures
of you know people outside of the
western world I mean English has
dominated you know over the last like
few centuries and most of our culture is
documented in English and that is
obviously a subset of all culture so
there's a clear representation question
there which you know I think is going to
be challenging when it comes to you know
smaller communities so I want to
conclude by seeing what impact you've
had on the crowd I should have asked this
at the beginning but let's at the end
put up your hand if you think that
having heard all of this the net impact
of AI for Humanity is going to be
positive okay and just as a check those
of you who think it'll be
negative well it's not overwhelming but
the positives clearly win out Mustafa uh
I think after you've read Mustafa's book
you will have a I think a very clear and
sober sense both of the potential
benefits but also of the risks you're
not a wild-eyed you know Panglossian
about this it's very serious it's a
really excellent book I recommend it
Mustafa thank you for joining us thank
you so much thank
you thanks a lot well done
[Applause]