AGI by 2030: Gerd Leonhard Interview on Artificial General Intelligence

Gerd Leonhard
18 Jul 2024 · 14:47

Summary

TL;DR: The speaker discusses the potential of artificial general intelligence (AGI), predicting its advent around 2030, and emphasizes the need for regulation to prevent misuse. They highlight the economic benefits of intelligent assistance in practical tasks but warn of the risks of dependence and societal shifts. The speaker advocates for a nonproliferation agreement to prevent uncontrollable AGI development and stresses the importance of collaboration and alignment in AI progress.

Takeaways

  • 🔍 The advent of artificial general intelligence (AGI) could be as close as five years away, with a conservative estimate being around 2030.
  • 🤖 Intelligent assistance, which includes practical applications like controlling emissions, protein folding, scheduling appointments, and translation, is already making life more efficient and is not inherently dangerous.
  • 🌐 The economic impact of AI is significant: it can increase GDP, but it does so unevenly, benefiting those already in advantageous positions more than others.
  • 📈 AI can drastically increase efficiency in various jobs, potentially leading to a 3-5x improvement, but this also raises concerns about job displacement and economic inequality.
  • 🌐 The development of AGI could lead to most people becoming unemployed, as machines would be able to understand and perform tasks across all domains, making human work redundant.
  • 🚫 There is a call for a nonproliferation agreement for AGI, similar to nuclear weapons, to prevent uncontrollable and self-replicating superintelligence from being developed.
  • 🌐 The current trajectory of AI development is driven by profit, often at the expense of broader societal and environmental considerations, which could lead to significant negative consequences.
  • 🌐 Companies like Microsoft and OpenAI are seen as being in charge of public policy and national security by extension, due to their influence over AI development and its potential impact on society.
  • 🚀 The speaker advocates for a cautious approach to AI development, emphasizing the need for regulation, collaboration, and a focus on solving practical problems rather than pursuing AGI.
  • 🌐 The speaker expresses a cautious optimism about the potential of AI to solve major global problems like cancer, water scarcity, or energy issues, but is pessimistic about the likelihood of voluntary collaboration and alignment in AI development.

Q & A

  • When does the speaker predict the advent of artificial general intelligence (AGI)?

    -The speaker predicts that the advent of AGI could be as close as five years away, but suggests 2030 as a safer estimate.

  • What is the speaker's view on the term 'intelligent assistance'?

    -The speaker refers to 'intelligent assistance' as AI that can handle practical tasks such as controlling emissions, protein folding, scheduling appointments, and translation, which are beneficial and not inherently dangerous.

  • How does the speaker use AI in his personal life?

    -The speaker uses a translation app called Ras to translate his keynote videos into Spanish and Portuguese, which has made a significant difference in his ability to communicate with a broader audience.

  • What economic impact does the speaker foresee from the use of intelligent assistance?

    -The speaker believes that intelligent assistance can increase economic possibilities, such as enabling him to speak multiple languages and summarize legal documents quickly, similar to the impact of cloud technology.

  • What is the speaker's concern about the uneven increase in GDP due to AI?

    -The speaker is concerned that the increase in GDP due to AI will be uneven, benefiting those who are already in a position to increase their wealth, and potentially exacerbating economic polarization.

  • How does the speaker view the role of companies like Microsoft and OpenAI in the development of AI?

    -The speaker is worried that companies like Microsoft and OpenAI are in charge of public policy and national security issues related to AI, which he believes should not be the responsibility of private companies.

  • What is the speaker's stance on the development of superintelligence?

    -The speaker is against the development of superintelligence, comparing it to the invention of the nuclear bomb, and believes it could lead to uncontrollable and dangerous consequences.

  • What advice does the speaker have for governments, users, and companies regarding AI?

    -The speaker advises that there should be a nonproliferation agreement for building superintelligence, similar to regulations on nuclear weapons, and that companies should be licensed and supervised in their AI development.

  • What is the speaker's view on the potential societal impacts of AGI?

    -The speaker is concerned about the potential societal impacts of AGI, such as unemployment, dependency on AI, and the side effects of AI like disinformation and bias.

  • What is the speaker's current outlook on the future of AI?

    -The speaker characterizes himself as a cautious optimist, believing that while AI can solve many practical problems, there is a need for more collaboration and alignment to prevent negative consequences.

  • What is the speaker's campaign about?

    -The speaker is campaigning for a framework that requires licensing and permission for companies to build AGI, emphasizing the need for regulation and collaboration to prevent misuse.

Outlines

00:00

🤖 The Economic Impact of Intelligent Assistance

The speaker discusses the potential arrival of artificial general intelligence (AGI) within five years, emphasizing the current role of intelligent assistance (IA) in practical applications such as controlling emissions, protein folding, scheduling appointments, and translation. The use of translation apps exemplifies how IA is making life more efficient, with economic benefits arising from increased communication capabilities. The speaker also highlights the potential for AI to increase GDP unevenly, favoring those already in advantageous positions, and the need to consider the broader societal and policy implications of AI development, particularly regarding companies like OpenAI and Microsoft that are shaping public policy and, by extension, national security.

05:00

🚀 The Unequal Growth of AI and Economic Disparity

This section delves into the potential of AI to outperform human intelligence in computational tasks due to its lack of physical limitations. The speaker warns of the risks associated with dependency on AI, such as becoming overly reliant on digital assistants, which could lead to a loss of personal autonomy. His advice to governments, users, and companies includes the need for a nonproliferation agreement to prevent the uncontrollable development of superintelligence. The speaker also addresses the challenges of regulating AI, comparing it to the regulation of nuclear weapons, and the importance of establishing rules to prevent existential risks associated with AI.

10:03

🌐 The Ethical and Regulatory Challenges of AI Development

The speaker presents a campaign advocating for the regulation and licensing of AGI development, likening it to the control of nuclear weapons due to its potential for catastrophic consequences. They argue that the pursuit of profit in AI development without considering its broader impact is irresponsible and could lead to societal collapse. The speaker calls for a shift in focus from profit to collaboration and alignment in AI development to ensure a positive outcome. They express a cautious optimism about the potential of AI to solve major problems but are pessimistic about the voluntary collaboration needed to achieve this without negative consequences.

Keywords

💡Artificial General Intelligence (AGI)

Artificial General Intelligence refers to a theoretical form of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond that of a human being. In the video, AGI is discussed as a potentially dangerous development due to its uncontrollable and self-replicating nature, which could lead to existential risks for humanity. The speaker suggests that AGI could be developed within five years, emphasizing the urgency of establishing regulations and nonproliferation agreements to manage its development.

💡Intelligent Assistance (IA)

Intelligent Assistance, as mentioned in the script, is a form of AI that performs practical tasks such as controlling emissions, protein folding, scheduling appointments, and translating languages. It is distinguished from AGI in that it is designed for specific, routine tasks rather than general intelligence. The speaker uses the example of a translation app called Ras to illustrate how IA can improve efficiency and ease of life, without posing the same existential risks as AGI.

💡Economic Impact

The economic impact of AI, particularly in the context of the video, refers to the influence of AI on the economy, including job displacement, increased productivity, and the potential for wealth generation. The speaker discusses the positive economic impact of IA, such as expanding business opportunities by enabling communication in multiple languages, and the potential for AI to increase GDP unevenly, which could exacerbate existing inequalities.

💡Regulation

Regulation in the context of the video pertains to the establishment of rules and oversight to govern the development and use of AI technologies. The speaker advocates for a nonproliferation agreement for AGI, similar to those for nuclear weapons, to prevent its uncontrolled proliferation and potential misuse. The concept of regulation is central to the speaker's argument for a cautious and controlled approach to AI development.

💡Existential Risk

Existential risk is a term used to describe threats that could lead to the extinction or near-extinction of humanity. In the video, the speaker identifies AGI as an existential risk due to its potential to become uncontrollable and self-replicating, leading to unforeseen and potentially catastrophic consequences. The speaker emphasizes the need for preemptive measures to mitigate these risks.

💡Dependence

Dependence, as discussed in the video, refers to the reliance on AI systems to the extent that individuals or societies could become unable to function without them. The speaker warns of the dangers of becoming overly dependent on AI, such as digital assistants, which could lead to a loss of autonomy and an increased vulnerability to AI failures or manipulation.

💡Unemployment

Unemployment in the context of the video is the potential job displacement that could result from the widespread adoption of AGI. The speaker suggests that AGI could render many jobs obsolete, as machines would be able to perform tasks more efficiently and at no cost, leading to widespread unemployment and social upheaval.

💡National Security

National security, as mentioned in the script, is the concept of safeguarding a nation against threats to its sovereignty, unity, and territorial integrity. The speaker expresses concern that companies like Microsoft and OpenAI are developing AI technologies that could have significant implications for national security, as they could be used for military purposes or to control information dissemination.

💡Polarization

Polarization refers to the division of opinions into opposing groups with little common ground. In the video, the speaker discusses the potential for AI to exacerbate existing social and economic polarization, as the benefits of AI development are unevenly distributed, leading to increased inequality and social unrest.

💡Collaboration

Collaboration is the act of working together; in the context of the video, it refers to the need for global cooperation in the development and regulation of AI technologies. The speaker argues that voluntary collaboration is essential to ensure that AI development aligns with the interests of humanity as a whole, rather than being driven solely by profit motives.

💡Side Effects

Side effects in the context of the video refer to the unintended consequences of AI development, such as disinformation, bias, and societal shifts. The speaker warns that the focus on short-term gains and immediate benefits of AI could lead to neglecting these side effects, which could have serious implications for society.

Highlights

The advent of artificial general intelligence (AGI) could be as close as five years away, with a conservative estimate of 2030.

The concept of Intelligent Assistance (IA) is introduced, which involves AI in practical applications like controlling emissions and protein folding.

AI's role in enhancing efficiency in everyday tasks such as Google Maps, scheduling, and translation is highlighted, with personal anecdotes about using a translation app.

The economic impact of AI is discussed, with personal examples of how translation apps have increased opportunities for communication and work.

AI's potential to increase GDP unevenly and exacerbate existing inequalities is a concern raised, emphasizing the need for policy to address these issues.

The dangers of AI development, such as societal shifts, disinformation, and bias, are compared to the historical neglect of side effects in the industrial revolution.

Concerns about companies like OpenAI and Microsoft being in charge of public policy and national security issues due to their AI developments.

The comparison of building AGI to the Manhattan Project, warning of the uncontrollable nature of such an intelligence.

The argument that AGI could lead to most people becoming unemployed, as machines would outperform humans in all general tasks.

The need for a nonproliferation agreement for AGI development, similar to those for nuclear weapons, to prevent uncontrollable outcomes.

The potential for an arms race in AGI development that could have catastrophic consequences for humanity.

The importance of regulation and supervision in AGI development to prevent misuse and ensure safety.

AI's potential to solve practical problems is contrasted with the risks of creating an uncontrollable superintelligence.

The paradox of companies investing in AGI despite acknowledging its potential for harm, and the lack of framework for responsible development.

The call for collaboration and alignment in AI development to prevent negative side effects and ensure beneficial outcomes.

A cautious optimism about the potential of AI to solve major problems, coupled with pessimism about the likelihood of voluntary collaboration.

The impact of political changes, particularly in the United States, on the future of AI regulation and development.

The campaign against the development of AGI without proper oversight, emphasizing the need for licensing and permission.

Transcripts

00:00

Leonhard: The advent of artificial general intelligence is potentially five years away. I always say 2030 to be safe. But what I call IA, intelligent assistance, goes from controlling emissions to protein folding, all the practical things that we think machines should be doing because they're better and faster: Google Maps, scheduling appointments, translating. I use a translation app called Ras to translate most of my keynote videos into Spanish and Portuguese, and that's made a huge difference for me. It's not perfect, but it works. My learning has been that for the practical, nuts-and-bolts routine stuff, AI will do a great job making life more efficient and easier, and that's not really dangerous by itself; it's basically just dangerous as a consequence of our routines changing. But it has great economic impact, so that's good, and I have lots of great examples of that. This is what I keep telling people: rather than thinking about Ex Machina and domination and consciousness and human agency, we should think about how we can use better software, which is intelligent assistance.

01:05

Interviewer: You mentioned a great economic impact. Is it possible to quantify this economic impact?

Leonhard: Yeah, just taking my own example: the fact that I can now speak Portuguese and Spanish and Finnish and even Hindi increases my chance of speaking to people in those places. They know that I don't speak Spanish, but it gets them to understand what I'm saying, and then when there's a gig they have a translator. So it increases my economic possibility. Then there are other things. I work with lawyers, and the lawyers are saying, okay, we've got a 550-page PDF about a lawsuit over some real estate affairs in Miami, and now they upload it to Google Notebook and it will summarize the deal points for you in 14 seconds. You may not be aware that they're not the right deal points, but it's a head start. It has great economic impact because it's like a super tool. It's like the cloud ten years ago: if you weren't on the cloud then, you are now, because the cloud is faster and you can use a mobile phone to access everything. So it's like that, basically. I think the forecasts show that in some jobs it could be a 3, 4, 5x efficiency gain, which is not always good. But basically what happens is that you can get very good at something very quickly. If you want to be very good, like a writer or a creative person, it's still the same job, but you have better tools. And the learning for me has been that a person with the better tools usually gets a better job.

02:36

Interviewer: On GDP, there's the prediction that AI can really increase the world's GDP. Do you think that is correct?

Leonhard: That is correct. The problem is that it increases GDP unevenly. It increases our GDP because we are already in the pole position for increase; we are already increasing. If you're in the top 10%, which we belong to, no doubt, then you're going to increase, because this is just the way the polarization of capital works, and technology is capital, ultimately. The hard part will be to figure out how to make it even enough for everybody else, so that those doing the more menial jobs or commodity jobs are uplifted, and that's a policy issue. The problem we're seeing right now, primarily with intelligent systems, is that we are not looking far enough ahead. We're looking at the short-term things and the immediate boosts, and then the side effects are not interesting. It's like oil and gas and coal: we had the industrial society, and the side effects were somebody else's problem. But now the side effects will be societal shifts, disinformation, bias, all of these things. This is what worries me about companies like OpenAI and Microsoft: they are kind of in charge of this now. They are in charge of public policy, and the military now by extension, because this is now a national security issue. It's not about how we're going to use simple software to make appointments; it's how you inform people, how you run databases, and of course how you run drones and things like that. These are not issues for private companies. It's not as if SAP would be responsible for the future of humanity, yet this is kind of what's happening: Microsoft and OpenAI are out there saying, we're building an artificial superintelligence, the AGI. And that's just not a good idea, because the extreme version of that is a machine that is impossible to control. It's like saying, okay, we're going to invent a nuclear bomb; we're going to have a Manhattan Project for AI, and then we're hoping that nobody will use it. No, that's not what's going to happen. Just like the Manhattan Project, it will get used, and it won't be good.

05:00

Interviewer: There are other analysts who think it's fearmongering, and who think that human intelligence is more than the zeros and ones of artificial intelligence.

Leonhard: I agree with all of that, but the problem is that this intelligence we're building is not like a human; it's superior because it is a machine. There's a limit to how much computing you can do in your brain. You can't expand your brain by 5x; there's no room for that. You can have faster connections between the neurons, but those are all physical limitations. Computers have none of that. As systems grow, they are going to become infinitely more powerful than us at the most basic computational jobs. That's just the nature of technology; there's no limit to that, whereas with us there are limits. Right now we're still kind of even (some people will say not quite even yet, but soon), but the potential of AI to compute faster is clearly there, and we can see it in front of us. And the AI doesn't have to be evil to cause damage. The damage can come, for example, from complete dependency. If I have, as people like Microsoft and others are propagating, an AI digital assistant like Siri, and this assistant is superintelligent and becomes like a person to me, which is the goal of course, then I become utterly dependent on it, like I'm dependent on the iPhone but times 1,000. I could not really do things anymore, and I wouldn't want to unplug it, because it's doing all these things for me. That is a much higher level of dependency than a computer or an iPhone or an iPad.

06:38

Interviewer: So what do we do? What is your advice for governments, for users, and for companies?

Leonhard: First, on the top level of the existential question, we need a nonproliferation agreement. We need an agreement that recognizes that if we build a superintelligence with the IQ of a billion, there's almost a 99.99% certainty of that going bad, because it's uncontrollable and self-replicating.

Interviewer: Do you feel like the blocs are close to that? Because it doesn't seem like the United States, Europe, China, and other Asia-Pacific countries will meet and confer about this.

Leonhard: It's like nuclear weapons: what is the alternative? An arms race of AGI would kill us all, instantly basically. You can think of it in different ways, but usually something like this comes to pass because something happens first: say a stock market crash because of AI, very likely to happen, or an air traffic control failure based on some problem with AI.

Interviewer: Do you see the European Union interested in doing something proactively, before something like that happens?

Leonhard: Yes, this is the whole purpose behind the AI Act: to establish those rules and say, okay, here are four categories. One is a total no-go, one is go-only-if, one is go-if-you-share, and the other is a free-for-all. And 95% of things are free to go because they're business applications. I mean, we care about those things, but they're not existential. If my job is 40% eroded because there's automation, we'll figure something out, but it's not existential to the world. Somebody once said we don't regulate the hammer just because I can kill somebody with a hammer; there's no regulation on hammers, yet hammers can be used to kill. But the magnitude of a hammer is nothing compared to the magnitude of AI. These are tools that can kill or they can build, and we need to figure out a way that the killing potential is greatly reduced. That is definitely a big issue. And what we're doing now is saying, okay, in the name of business, progress, and techno-optimism, we're going to let these companies build whatever they can build.

08:58

Interviewer: So you would say that disinformation, deepfakes, and this more existential threat are the biggest negatives of the current development of AI?

Leonhard: Let's put it this way: if you're shooting to build an artificial general intelligence, you're essentially shooting to make most people unemployed. That's what it means; it's a machine for unemployment. If a machine gets generally intelligent, it means it understands everything in real time: the entire internet, every communication, every person, everything that has ever happened. The machine understands the sense of every language and every possible meaning. That means a lot of the work we think of as human work would cease to be; we can't compete, because they'd be essentially free. The advent of artificial general intelligence is potentially five years away (I always say 2030 to be safe), but that's basically what it is. And there are trillions of dollars behind it; this is the biggest raise of money you've ever seen in technology, bigger than anything we have ever invested in climate change. So now we have two big pinnacles. One is green everything, where all the money goes into climate technology, and the other is AI. The challenge with both is that we're doing this not necessarily to increase flourishing, but to increase profit.

10:16

Interviewer: So talk to me about this project that you have.

Leonhard: Yeah, it's a campaign that I'm working on: you need to be commissioned, licensed, and supervised to build something that is the equal of the nuclear bomb, because it's pretty hard to build a nuclear bomb, but not an AI, especially when it's open source. So if you're building something like this, if you're running something like this as a for-profit company, you need to be subject to regulation and supervision, and there needs to be a nonproliferation agreement; not every country can have its own AGI. It's not that this is possible next month, but in scientific terms it's definitely possible, because we're on this exponential curve of more data, more computing power, more GPUs. The paradox is that many of those companies have openly said for years now that this could be heaven or hell, and it's just kind of accepted that they can dabble with it. So I think we need a framework that says you have to be licensed to do that, you have to have permission, you have to collaborate. And it's not just about selling more products; sometimes it's good to forego options, to say: we're going to build IA, we're going to use artificial intelligence for business, we're going to use it to solve problems, but we're not going to build an omnipotent digital network so that we become the second-tier intelligence.

11:40

Interviewer: In your conversations with other futurists, what is the common sense? Are people on board with your vision? Do they have other ideas that they're sharing with you?

Leonhard: I think people are by and large on board with this concept. To make it happen we have to think wider, and there are quite a few people thinking wider, but when they do, they get punished by the stock market as a result. The two biggest companies in the world are Nvidia, making the juice for the AI, and the Saudi Arabian oil company. And this kind of logic, profit above everything, is the end period of humanity, because basically what it does is bring everything to the top, and then there are too many problems and it all collapses together, because we've never fixed any of the problems, like climate change, since we have this one objective. So a lot of people are saying: we would if we could go in this direction. And then we have the Oppenheimer effect among the scientist friends and researcher friends and AI friends that I have, who say, you know, I'm just doing what is scientifically possible, because it needs to be done, it's progress. I'm saying no, that's not how it works, because you're going to make something that somebody else will decide how to use, and you have nothing to say about it.

12:56

Interviewer: So to finish, let's establish whether you are still an optimist, a cautious optimist, or very pessimistic. For instance, this is a very big election year; things can go in very different ways here in the United States, and the future of AI will depend on that, because whoever's in the White House will have a different approach to regulation and everything else. How do you characterize yourself right now: cautious optimist or completely pessimistic?

Leonhard: Right now I would say it will get worse before it gets better. That's because people fear the future; they don't feel a lot of hope, and they rack up all the negative stories. I think we're entering a period this year where there will be some losses and some wins, and there will be more polarization. But I'm hopeful, exactly because we are at the pinnacle of new solutions coming in from science and technology that we could use to solve pretty much every practical problem. We're just not spending nearly enough time on alignment and on collaboration. We're spending trillions of dollars on progress and inventing things; we don't look at everything else, we just go ahead and everybody does their own thing, and that won't work, because there are too many side effects. That's my biggest worry. So I'm extremely optimistic that we can solve things like cancer or water or power and energy, but I'm pessimistic about us voluntarily coming together and collaborating so this can end well. That's why this campaign of denying AGI says: if you're pursuing this, if you're creating a digital entity that looks to supersede us, this is not just private company business; that's messing with humanity.
