AGI by 2030: Gerd Leonhard Interview on Artificial General Intelligence
Summary
TL;DR: The speaker discusses the potential of artificial general intelligence (AGI), predicting its advent around 2030, and emphasizes the need for regulation to prevent misuse. They highlight the economic benefits of intelligent assistance for practical tasks but warn of the risks of dependence and societal shifts. The speaker advocates for a nonproliferation agreement to prevent uncontrollable AGI development and stresses the importance of collaboration and alignment in AI progress.
Takeaways
- The advent of artificial general intelligence (AGI) could be as close as five years away, with a conservative estimate being around 2030.
- Intelligent assistance, which includes practical applications like controlling emissions, protein folding, scheduling appointments, and translation, is already making life more efficient and is not inherently dangerous.
- The economic impact of AI is significant, as it can increase the GDP, but it does so unevenly, benefiting those already in advantageous positions more than others.
- AI can drastically increase efficiency in various jobs, potentially leading to a 3-5x improvement, but this also raises concerns about job displacement and economic inequality.
- The development of AGI could lead to most people becoming unemployed, as machines would be able to understand and perform tasks across all domains, making human work redundant.
- There is a call for a nonproliferation agreement for AGI, similar to nuclear weapons, to prevent uncontrollable and self-replicating superintelligence from being developed.
- The current trajectory of AI development is driven by profit, often at the expense of broader societal and environmental considerations, which could lead to significant negative consequences.
- Companies like Microsoft and OpenAI are seen as being in charge of public policy and national security by extension, due to their influence over AI development and its potential impact on society.
- The speaker advocates for a cautious approach to AI development, emphasizing the need for regulation, collaboration, and a focus on solving practical problems rather than pursuing AGI.
- The speaker expresses a cautious optimism about the potential of AI to solve major global problems like cancer, water scarcity, or energy issues, but is pessimistic about the likelihood of voluntary collaboration and alignment in AI development.
Q & A
When does the speaker predict the advent of artificial general intelligence (AGI)?
-The speaker predicts that the advent of AGI could be as close as five years away, but suggests 2030 as a safer estimate.
What is the speaker's view on the term 'intelligent assistance'?
-The speaker refers to 'intelligent assistance' as AI that can handle practical tasks such as controlling emissions, protein folding, scheduling appointments, and translation, which are beneficial and not inherently dangerous.
How does the speaker use AI in his personal life?
-The speaker uses a translation app called Ras to translate his keynote videos into Spanish and Portuguese, which has made a significant difference in his ability to communicate with a broader audience.
What economic impact does the speaker foresee from the use of intelligent assistance?
-The speaker believes that intelligent assistance can increase economic possibilities, such as enabling him to speak multiple languages and summarize legal documents quickly, similar to the impact of cloud technology.
What is the speaker's concern about the uneven increase in GDP due to AI?
-The speaker is concerned that the increase in GDP due to AI will be uneven, benefiting those who are already in a position to increase their wealth, and potentially exacerbating economic polarization.
How does the speaker view the role of companies like Microsoft and OpenAI in the development of AI?
-The speaker is worried that companies like Microsoft and OpenAI are in charge of public policy and national security issues related to AI, which he believes should not be the responsibility of private companies.
What is the speaker's stance on the development of superintelligence?
-The speaker is against the development of superintelligence, comparing it to the invention of the nuclear bomb, and believes it could lead to uncontrollable and dangerous consequences.
What advice does the speaker have for governments, users, and companies regarding AI?
-The speaker advises that there should be a nonproliferation agreement for building superintelligence, similar to regulations on nuclear weapons, and that companies should be licensed and supervised in their AI development.
What is the speaker's view on the potential societal impacts of AGI?
-The speaker is concerned about the potential societal impacts of AGI, such as unemployment, dependency on AI, and the side effects of AI like disinformation and bias.
What is the speaker's current outlook on the future of AI?
-The speaker characterizes himself as a cautious optimist, believing that while AI can solve many practical problems, there is a need for more collaboration and alignment to prevent negative consequences.
What is the speaker's campaign about?
-The speaker is campaigning for a framework that requires licensing and permission for companies to build AGI, emphasizing the need for regulation and collaboration to prevent misuse.
Outlines
The Economic Impact of Intelligent Assistance
The speaker discusses the potential arrival of artificial general intelligence (AGI) within five years, emphasizing the current role of intelligent assistance (IA) in practical applications such as controlling emissions, protein folding, scheduling appointments, and translation. The use of translation apps exemplifies how IA is making life more efficient, with economic benefits arising from increased communication capabilities. The speaker also highlights the potential for AI to increase GDP unevenly, favoring those already in advantageous positions, and the need to consider the broader societal and policy implications of AI development, particularly regarding companies like OpenAI and Microsoft that are shaping public policy and national security.
The Unequal Growth of AI and Economic Disparity
This paragraph delves into the potential of AI to outperform human intelligence in computational tasks due to its lack of physical limitations. The speaker warns of the risks associated with dependency on AI, such as becoming overly reliant on digital assistants, which could lead to a loss of personal autonomy. The advice given to governments, users, and companies includes the need for a nonproliferation agreement to prevent the uncontrollable development of superintelligence. The speaker also addresses the challenges of regulating AI, comparing it to the regulation of nuclear weapons, and the importance of establishing rules to prevent existential risks associated with AI.
The Ethical and Regulatory Challenges of AI Development
The speaker presents a campaign advocating for the regulation and licensing of AGI development, likening it to the control of nuclear weapons due to its potential for catastrophic consequences. They argue that the pursuit of profit in AI development without considering its broader impact is irresponsible and could lead to societal collapse. The speaker calls for a shift in focus from profit to collaboration and alignment in AI development to ensure a positive outcome. They express a cautious optimism about the potential of AI to solve major problems but are pessimistic about the voluntary collaboration needed to achieve this without negative consequences.
Keywords
Artificial General Intelligence (AGI)
Intelligent Assistance (IA)
Economic Impact
Regulation
Existential Risk
Dependence
Unemployment
National Security
Polarization
Collaboration
Side Effects
Highlights
The advent of artificial general intelligence (AGI) could be as close as five years away, with a conservative estimate of 2030.
The concept of Intelligent Assistance (IA) is introduced, which involves AI in practical applications like controlling emissions and protein folding.
AI's role in enhancing efficiency in everyday tasks such as Google Maps, scheduling, and translation is highlighted, with personal anecdotes about using a translation app.
The economic impact of AI is discussed, with personal examples of how translation apps have increased opportunities for communication and work.
AI's potential to increase GDP unevenly and exacerbate existing inequalities is a concern raised, emphasizing the need for policy to address these issues.
The dangers of AI development, such as societal shifts, disinformation, and bias, are compared to the historical neglect of side effects in the industrial revolution.
Concerns about companies like OpenAI and Microsoft being in charge of public policy and national security issues due to their AI developments.
The comparison of building AGI to the Manhattan Project, warning of the uncontrollable nature of such an intelligence.
The argument that AGI could lead to most people becoming unemployed, as machines would outperform humans in all general tasks.
The need for a nonproliferation agreement for AGI development, similar to those for nuclear weapons, to prevent uncontrollable outcomes.
The potential for an arms race in AGI development that could have catastrophic consequences for humanity.
The importance of regulation and supervision in AGI development to prevent misuse and ensure safety.
AI's potential to solve practical problems is contrasted with the risks of creating an uncontrollable superintelligence.
The paradox of companies investing in AGI despite acknowledging its potential for harm, and the lack of framework for responsible development.
The call for collaboration and alignment in AI development to prevent negative side effects and ensure beneficial outcomes.
A cautious optimism about the potential of AI to solve major problems, coupled with pessimism about the likelihood of voluntary collaboration.
The impact of political changes, particularly in the United States, on the future of AI regulation and development.
The campaign against the development of AGI without proper oversight, emphasizing the need for licensing and permission.
Transcripts
the advent of artificial general
intelligence is potentially five years
away I always say 2030 to be safe but
what I call IA intelligent assistance
that goes from controlling emissions to
protein folding all the practical things
that we think that machines should be
doing because they're better and
faster so Google Maps scheduling
appointments translating I use a
translation app called Ras to translate
most of my keynote videos into Spanish and
Portuguese and that's made a huge
difference for me it's not perfect but
it works and my learning has been that
basically the practical stuff the kind
of nuts-and-bolts routine AI will do a
great job making life more efficient and
easier and that's not really dangerous
by itself that's basically just
dangerous as a consequence of our
routines changing but it has great
economic impact so that's good and I
have lots of great examples for that and
this is what I keep telling people
rather than thinking about ex machina
and domination and Consciousness and
human agency we should think about how
we can use better software which is
intelligent assistance and you mentioned
a great economic impact is it possible
to quantify this economic impact yeah I'm
just taking my own example the fact that
I can now speak Portuguese and and
Spanish and Finnish and even Hindi
increases my chance of speaking to
people in those places and they know
that I don't speak Spanish but it gets
them to understand what I'm saying and
then when there's a gig they have a
translator right so it increases my
economic possibility and then there's
other things like you know I work with
lawyers the lawyers are saying okay
we've got a 550 page PDF about the
lawsuit about some real estate Affairs
in Miami and now they upload this to
Google NotebookLM and it
will summarize the deal points for you
in 14 seconds and you may not be
aware that they're not the right deal
points but it's a head start it has
great economic impact because it's like
a super tool 10 years ago if you weren't
on the cloud and now you're on the cloud
because the cloud is faster and you can
use a mobile phone to access everything
so it's like that basically and I think
the forecast shows that in some jobs it
could be 3 4 5x efficiency which
is not always good but basically what
happens is that uh you can get very good
at something very quickly if you want to
be very good like a writer or creative
person it's still the same job but you
have better tools and the learning has
been for me a person with the better
tools usually gets a better
job there's the prediction that the
AI can can really increase the world's
GDP right you think that is correct that
is that is correct the problem is that
it increases GDP unevenly so it
increases our GDP because we are
already in a position to increase we're
already increasing I mean if you're
the top 10% which we belong to no doubt
then you're going to increase
because this is just the way that the
polarization of capital works and
technology is capital ultimately right
the hard part will be to figure out how
to make it even enough for everybody
else so that they are doing the sort of
menial jobs or commodity jobs they have
to be uplifted and that's a policy issue
and the problem that we're seeing right
now primarily with intelligence systems
is that we are not looking at this far
enough we're looking at the short-term
things and the immediate boosts and then
you know side effects are not
interesting it's like oil and gas and
coal you know we had the industrial
society and side effects were somebody
else's
problem but now the side effects will be
you of societal shifts of disinformation
of bias and all of these things and this
is what worries me about companies like
OpenAI and Microsoft is that they are
kind of in charge of this now I mean
they are in charge of public policy the
military now by extension because
basically this is now a national
security issue not how we're going to use
Simple software to make appointments or
so but how you inform people and how you
run
databases and how you of course run
drones and things like that I mean these
are issues that are not issues of
private companies it's not like SAP
would be responsible for the future of
humanity and this is kind of what's
happening is that Microsoft and OpenAI
are out there saying we're building an
artificial
superintelligence the AGI yeah and and
that that that's just not a good idea
because the extreme version of that is a
machine that is impossible to control
it's like say okay we're going to invent
a nuclear bomb the Manhattan Project
we're going to have a Manhattan project
for AI and then we're hoping that nobody
will use it and no that's not what's
going to happen right just like the
Manhattan Project it will get used and it won't be
good there are other analysts that think
it's fear mongering and they think that
human intelligence is more than this
zeros and ones of artificial intelligence I agree
I agree with all of that but the problem
is that this intelligence that we're
building is not like a human it's
Superior because it is a machine right
so there's a limit to how much Computing
you can do in your brain you can't
expand your brain by 5x there's no room
for that you can have faster connections
between the neurons those are all things
that are physical limitations computers
have none of that as systems are
growing they are going to become
infinitely more powerful at the most
basic computational jobs than us that's
just the nature of Technology there's no
limit to that right with us there are
limits so right now we're still kind of
even some people will say not quite even
yet but soon okay but the potential
of AI to be Computing faster is clearly
there and and we can see that in front
of us it's not that the AI has to be
evil to cause the damage it can be done
by for example complete dependency if I
have as people are propagating like
Microsoft and others if I have an AI
digital assistant like Siri and this
assistant is super intelligent and
becomes like a person to me which is the
goal of course then I become utterly
dependent on that and like I'm dependent
on the iPhone but times 1,000 and so
I could not really do things anymore and
I wouldn't want to unplug it because
it's doing all these things for
me and so that is a much higher level of
dependency than a computer or an iPhone
or an iPad so what do we do what is
your advice I guess for governments and
for users and companies first I
think on the top level of the
existential question we need a
nonproliferation agreement we need an
agreement that says we're not going to build
a superintelligence with the IQ of a
billion there's almost a 99.999%
certainty of that going bad because
it's uncontrollable and
self-replicating do you feel like the
blocs are close to that because it
doesn't seem like the United States
Europe and you know China and other
asia-pacific countries will meet and
confer about this it's like nuclear
weapons what is the alternative an arms
race of AGI would kill us all instantly
basically you can think of it in different
ways but the reason that something comes
to pass like this is because something
happens usually like say a stock
market crash because of AI very likely
to happen or we could have an air traffic
control crash based on some problem with
AI do you see the European Union interested
in doing something proactively before
something like that happens yes this is the
whole purpose behind the AI Act is to
establish those rules and say okay
here's four categories one total no-go one
go only if and the other one go if you
share and the other one is free for all
and 95% of things are free to go because
they're business applications right so I
mean I we care about those things but
they're not
existential so if my job is 40% eroded
because there's
automation we'll figure something out
but it's not existential to the world
somebody once said we don't regulate a
hammer because I can kill somebody with
a hammer so we don't have regulation on
hammers but hammers can be used to
kill the magnitude of a hammer is
nothing compared to the magnitude of AI
these are tools that can kill or they
can build but we need to figure out a
way that the killing potential is
greatly reduced and this is definitely a
big issue and now what we're doing now
is we're saying okay in the name of
business progress techno optimism we're
going to let these companies build
whatever they can build yeah so you
would say that disinformation deep
fakes and this more existential threat
are the biggest negatives of the current
development of AI let's put it this way
if you're going to shoot to build an
artificial general intelligence you're
essentially shooting to make most people
unemployed you know that's what it means
it's a machine for unemployment if a
machine gets generally intelligent it means
it understands everything in real time
the entire internet every communication
every person everything that has ever
happened the machine understands the
sense of every language and every
possible meaning and so that means that
that a lot of work that we think of as
human work would cease to be we can't
compete because they'd be essentially free
the advent of artificial general
intelligence is potentially five years
away I always say 2030 to be safe but
that's basically what it is and you
know there's trillions of dollars I mean
this is the biggest raise of money that
you've ever seen in technology it's
bigger than anything we have ever
invested in climate change so now
we have two big Pinnacles one is green
everything that's all the money goes
there climate technology and AI The
Challenge on both of those things is
that we are doing this not to
necessarily increase flourishing but to
increase profit and so talk to me about
what is this project that you have yeah
it's a campaign that I'm working on you
need to be commissioned licensed
supervised to build something that is
the equal of the nuclear bomb because it's
pretty hard to build a nuclear bomb but
not an AI especially when it's open
source so you're building something like
this you're running something like this
for a profit company you need to be
subject to regulation and supervision
and there needs to be a nonproliferation
agreement not every country can have
their own AGI it's not that this is
possible next month but in scientific
terms it's definitely possible because
we're on this exponential chart of more
data more computing power more GPUs and
the Paradox is many of those companies
have openly said for years now that
this could be heaven or hell and it's
just kind of accepted that they can
dabble with this right so I think we
need to have a framework that says you
have to be licensed to do that you have
to have permission you have to
collaborate and it's not just about
selling more products sometimes it's
good to forego options like to say we're
going to build IA we're going to use
artificial intelligence we're going to use
it for business we're going to use it to
solve problems but we're not going to
build an omnipotent digital network so
that we can become the second tier
intelligence in your conversations with
other futurists what is the common sense
are people on board with your vision do
they have other ideas that they're
sharing with you I think people are
by and large on board with this concept to
make that happen we have to think wider
and there are quite a few people
thinking wider but if they do they get
punished by the stock market as a
result the two biggest
companies in the world Nvidia making the
juice for the AI and the second one
Saudi Arabian oil company and this kind
of logic of profit above everything
that's the end period of humanity
because basically what it does it it
brings it to the top and then there's
too many problems and it all collapses
together because we've never fixed any
of the problems in like climate change
because we have this one of object
different so a lot of people are saying
we would if we could go down this
direction and then we have the Oppenheimer
effect among the scientist friends and
researcher friends and AI friends that I
have of saying you know I'm just doing
what is scientifically possible because
it needs to be done you know it's
progress I'm saying no that's not how it
works because you're going to make
something that somebody else will decide
how to use and you have nothing to say
about it so to finish let's just
establish if you are still an optimist a
cautious Optimist or are you very
pessimistic for instance this is a very
big election year things can go in very
different ways here in the United States
and the future of AI will depend on that
because whoever's in the white house
will have a different approach to
regulation and everything else how do
you characterize yourself right now
cautious Optimist or completely
pessimistic right now I would say it
will get worse before it gets better
that is because people Fear the Future
because they don't feel a lot of Hope
and they and they rack up all the
negative stories I think we're
entering a period of this year where
there will be some losses and some wins
there will be more polarization but I'm
hopeful exactly because we are at the
Pinnacle of new Solutions coming in with
science and technology that we could use
to solve pretty much every practical
problem but we're not spending nearly
enough time on alignment and on
collaboration we're spending trillions
of dollars on progress and inventing
things we don't look at everything else
we just go ahead and everybody does
their own thing and that won't work
because there are too many side effects
so that's my biggest worry so I'm
extremely optimistic there that we can
solve things like cancer or water or
power and energy but I'm pessimistic on
us voluntarily coming together and
collaborate so this can end well that's
why the point of my campaign on denying AGI
is to say that if you're pursuing this
if you're creating a digital
entity that looks to supersede us this is
not the business of a private company
that's like messing with
humanity