AI: What is the future of artificial intelligence? - BBC News

BBC News
21 Apr 2023 · 16:38

Summary

TL;DR: In this discussion, experts Evan Burfield and Gary Marcus debate the need for global governance of AI to address job loss, misinformation, and other potential threats posed by AI. They argue for a coordinated approach to AI regulation and research, warning against waiting to act until problems arise. The conversation highlights the urgent need for policymakers to understand and prepare for AI's societal impacts, particularly in the 2024 U.S. election cycle.

Takeaways

  • 🤖 There is a growing concern about the rapid advancement of AI and its potential societal impacts.
  • 🚫 Elon Musk and other influential figures are calling for a temporary halt in AI development to better understand its implications.
  • 🌐 Professor Gary Marcus advocates for global governance of AI, similar to the International Atomic Energy Agency, to prevent fragmented and ineffective policies.
  • 🧠 Evan Burfield highlights the difficulty of enforcing a moratorium on AI development due to competitive pressures and varying levels of compliance.
  • 📈 The potential of AI to significantly disrupt society and democracy is a major concern, with the 2024 political cycle being a critical timeframe for impact.
  • 🛑 The idea of a moratorium is not only about regulation but also about preparing society for AI's wide-ranging effects.
  • 📚 Lessons from the internet and social media's impact suggest that proactive measures are needed to avoid repeating past mistakes with AI.
  • 🔒 There is a pressing need for education of policymakers to understand AI technologies and their implications for governance and society.
  • 🌐 The creation of a global AI governance body is proposed to foster international cooperation and to manage the rapid pace of AI advancements.
  • 🔮 The future of AI is not just about technological advancement but also about the ethical and societal frameworks that guide its development and use.

Q & A

  • What is the main concern regarding the advancement of AI as discussed in the transcript?

    -The main concern is the potential loss of control over AI, the spread of propaganda and fake news, and the possibility of machines outsmarting humans, which could lead to significant societal and ethical challenges.

  • Who are some of the key figures calling for a moratorium on AI development?

    -Key figures calling for a moratorium include Elon Musk, along with thousands of entrepreneurs, academics, and scientists.

  • What does Professor Gary Marcus suggest as a solution for global AI governance?

    -Professor Gary Marcus suggests a global governance model similar to the International Atomic Energy Agency, where the world collaborates to set rules and develop tools to mitigate AI threats.

  • What are the potential negative impacts of AI on society according to the discussion?

    -Potential negative impacts include job displacement, the spread of misinformation, cybercrime, and the risk of AI models being used to create propaganda.

  • What does Evan Burfield think about the feasibility of a moratorium on AI development?

    -Evan Burfield believes that a moratorium is hard to enforce and might give a false sense of security. He suggests focusing on preparing society for AI implications rather than trying to halt progress.

  • What does the discussion indicate about the current state of AI understanding among policymakers?

    -The discussion indicates that many policymakers lack a deep understanding of AI and its implications, which is a significant concern as AI continues to advance rapidly.

  • What historical lesson from the internet and social media does Professor Marcus highlight?

    -Professor Marcus highlights the lesson of not waiting too long to act on new technologies, referencing the issues of privacy, polarization, and misinformation that arose from the internet and social media.

  • What does Evan Burfield foresee as the impact of AI on the 2024 political cycle in the U.S.?

    -Evan Burfield foresees AI having a profound impact on the 2024 political cycle in the U.S., including the potential for AI to influence elections through misinformation and propaganda.

  • What is the 'tsunami' metaphor mentioned by Evan Burfield referring to?

    -The 'tsunami' metaphor refers to the overwhelming and transformative impact that AI advancements are expected to have on society, the economy, and various aspects of life.

  • What does Professor Marcus recommend for dealing with the rapid advancements in AI?

    -Professor Marcus recommends immediate action and the establishment of a central oversight or global organization to coordinate responses to AI advancements and mitigate potential threats.

  • What is the Auto-GPT mentioned in the transcript, and why is it concerning?

    -Auto-GPT refers to AI systems training other AI systems, which is concerning because it represents a rapid and potentially uncontrolled advancement in AI capabilities, raising questions about the speed at which AI is evolving and the potential lack of oversight.

Outlines

00:00

🤖 AI Ethics and Global Governance

The opening segment discusses the ethical implications of AI advances and the case for global governance. It raises concerns about job automation, AI-generated propaganda, and the potential for AI to surpass human intelligence, and notes the call for a moratorium on AI development until its implications are better understood; Elon Musk and other influential figures are cited as supporters of this pause. The discussion introduces two guests: Evan Burfield, a tech investor, and Gary Marcus, professor emeritus at New York University. Marcus advocates coordinated global governance of AI, drawing parallels to international bodies like the International Atomic Energy Agency.

05:00

🌪 Preparing for the AI Tsunami

This section of the script focuses on the inevitability of AI's impact on society and the economy. Evan Burfield, based in Austin, Texas, observes that startups are actively integrating AI into their operations. He discusses both the dystopian risks and the potential for AI to enhance various sectors, including medicine and government services. The conversation touches on the challenges of regulating AI and the importance of adapting social and market policies to harness AI's benefits while mitigating its risks. There's a consensus that policymakers must be better educated about AI to prepare for its transformative effects, which could significantly impact the 2024 political cycle in the U.S.

10:01

🏛 Policymakers and AI Understanding

The paragraph emphasizes the gap in understanding AI among policymakers. It points out that while there's a growing awareness of AI's potential, there's a lack of proactive measures to prepare for its implications. The discussion suggests that policymakers are not considering the broader implications of AI, including its potential to be supercharged by other technologies like quantum computing. There's a call for the establishment of institutions that focus on AI's long-term impact and the need for a global organization to coordinate AI governance, similar to existing international entities.

15:03

🗳️ AI and Democracy: Challenges Ahead

This final paragraph of the script delves into the potential for AI to disrupt democracy through misinformation and propaganda. It raises concerns about AI's role in influencing election outcomes, drawing parallels to past instances of foreign interference. The conversation underscores the urgency for policymakers to understand AI to effectively address these challenges. It also highlights the need for education and proactive measures to harness AI's potential for positive societal impact while safeguarding against its risks.

Keywords

💡Artificial Intelligence

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is a central theme with discussions revolving around its development, governance, and potential risks. For example, the script mentions 'the development and training of artificial intelligence' and the need for 'Global governance for AI'.

💡Moratorium

A moratorium is a temporary ban or suspension of a particular activity. In the context of the video, a moratorium on AI development is proposed to better understand the implications and potential dangers before progressing further. Elon Musk is mentioned as one of the proponents of halting AI development for at least six months.

💡Global governance

Global governance refers to the management of public affairs at the global level, involving multiple countries and organizations. The script discusses the need for a global approach to AI governance to prevent a fragmented and potentially ineffective regulatory landscape. An example from the script is the call for a system 'modeled on something like the International Atomic Energy Agency'.

💡Misinformation

Misinformation is false or inaccurate information, often spread unintentionally. In the video, the panelists express concern about AI being used to flood the internet with propaganda and fake news, which can lead to misinformation. This is tied to the broader conversation about the need for guardrails in AI to prevent such misuse.

💡Cyber warfare

Cyber warfare involves the use of technology and the internet to disrupt or attack another nation's critical infrastructure. The script mentions the need for new tools to combat threats like cyber warfare, illustrating the broader security concerns associated with AI and its potential misuse.

💡Competitive Advantage

Competitive advantage refers to the unique attributes that allow a company or nation to outperform its rivals. The script discusses concerns that a moratorium on AI development could lead to a loss of competitive advantage, as companies and countries might not want to halt their progress in this field.

💡Quantum Computing

Quantum computing is a type of computation that uses quantum bits to perform operations on data. In the video, it is mentioned as a technology that could supercharge AI, providing machines with human-like emotions. This highlights the potential for future advancements in AI to be driven by breakthroughs in other fields.

💡Policymakers

Policymakers are individuals or groups that make and enforce policies, often within a government context. The script discusses the challenges policymakers face in understanding and regulating AI, emphasizing the need for education and proactive measures to prepare for the implications of AI.

💡Election Security

Election security involves measures taken to protect the integrity of electoral processes from tampering or interference. The video highlights concerns that AI could be used to spread misinformation and propaganda, potentially affecting election outcomes, making election security a critical issue in the context of AI development.

💡Terminator Scenario

The 'Terminator Scenario' refers to a hypothetical situation where AI becomes self-aware and poses a threat to humanity, as depicted in the Terminator movie franchise. The script uses this term to illustrate the potential risks of developing AI without proper safeguards, although some panelists argue that we are far from reaching such a scenario.

💡Generative Models

Generative models are a type of machine learning model that can generate new data samples similar to the training data. In the video, generative models are discussed in the context of startups applying AI to various problems, highlighting the rapid advancements and applications of AI in creating new content.

Highlights

Discussion on the potential risks of AI and automation, including job displacement and the spread of propaganda.

Call for a moratorium on AI development until we better understand its implications.

Elon Musk and thousands of others signed an open letter advocating for a pause in AI training.

The need for global governance of AI to prevent a patchwork of regulations.

Proposal for an international AI authority similar to the International Atomic Energy Agency.

Concerns about the difficulty of enforcing a moratorium on AI development.

The importance of preparing society for the implications of AI.

Discussion on the potential for AI to cause a 'tsunami' of upheaval in the next five years.

The current state of AI is already profound and will change how we live, work, and engage with each other.

The impact of AI on democracies and the 2024 political cycle.

The challenge of educating policymakers about AI and its implications.

The potential for AI to be supercharged by other technologies like Quantum Computing.

The UK government's decision not to have a dedicated AI regulator and the implications of this decision.

The need for a central oversight body for AI in the United States and potentially a G7 meeting of AI ministers.

The rapid pace of AI development and the challenges it poses to regulation and governance.

The potential for AI to interfere in democracies and elections, and the need for education and preparation.

Transcripts

Presenter: Should we automate away all the jobs, including the fulfilling ones? Should we allow AI machines to flood the internet with propaganda and fake news? Should we develop non-human minds, smarter than our own, machines that might one day outnumber us or outsmart us? Do we risk losing control? Now, you might think that sounds like some futuristic script from a Terminator movie, but last month some of the most well-known figures involved in the development and training of artificial intelligence called for a moratorium until we better understand where we're going. An open letter was signed by thousands of entrepreneurs, academics and scientists, including Elon Musk, who wants the training of artificial intelligence halted for at least six months. We're going to dig deep into this over the next 20 minutes or so in the company of two people who know a thing or two about it. Joining me is the tech investor Evan Burfield, the author of Regulatory Hacking: A Playbook for Startups; he's in Texas. And Professor Gary Marcus is in Vancouver; he's professor emeritus at New York University and the author of Rebooting AI. Professor, let me start with you. Clearly, with such advances as we're seeing, we have to set some guardrails. Who do you think should be in charge of that?

Gary Marcus: Well, we're both professors; I think that we need global governance for AI. We have a lot of patchworks right now, almost balkanized. The worst case, from the companies' perspective and the world's perspective, is if there are 193 jurisdictions each deciding their own rules, requiring their own training of these models, each run by governments that don't have much specific expertise in AI. So what I called for, in an Economist editorial earlier this weekend and in a TED talk earlier this week, was a global system modeled on something like the International Atomic Energy Agency, where the world comes together and says: we have a new threat here, but it's really a new set of threats, and we need to work together on this. So the number one thing is it should be global, and the number two thing is it can't be just policy; there also has to be a research side, because we need to invent new tools, like we had to invent for fighting spam and cyber warfare and so forth. There are so many different threats, as you mentioned, around misinformation, cybercrime and so forth, so we need a kind of standing organization that is global and well-financed to try to build tools to mitigate those threats.

Presenter: So, Evan Burfield, there are many, many people who just want to press the pause button until we work out some of these things. But I can already see, and I've heard, the reasons why that probably isn't possible, and that is because not everybody will stop, and people are worried about losing competitive advantage. So how do we best go about this?

Evan Burfield: Yeah, I think that's exactly right. The challenge with a moratorium is that it's incredibly hard to enforce. The responsible actors would be more likely to follow it; the irresponsible actors wouldn't. But that's actually not so much my concern with the moratorium. There are absolutely questions we need to be asking about the governance of AI, about what industry can do, what government can do. I think the letter did spark a conversation: Schumer is working on a new AI bill here in the U.S., and the rumor is McCarthy's working on a Republican version. But what I think is actually much more important is to start to have the conversations about how we prepare our society, our economy, our political system, democracy itself, for all of the implications of AI that are coming one way or another. And I suspect a year from now we'll say this was less impactful than we thought; five years from now it will be an absolute tsunami of upheaval. We have this window right now where we can have this conversation and we can get creative, and I think we've got to use it. A moratorium gives us this false sense of security that we have control and can stop it, versus figuring out how we ride this tsunami and try to direct it in a much better direction.

Presenter: Professor Marcus, did we learn anything from the last technological advance, the advance of the internet, of social media? Are there lessons from that, which, let's face it, we didn't do very well, that are applicable here?

Gary Marcus: I think the number one lesson is you don't want to close the barn door after the horse has left. We're very late in figuring out what to do about social media; I think we probably handled privacy alone in the wrong way. We wound up with so much polarization and hostility, we wound up with misinformation. I think we waited too long to act, so the number one lesson is we should get on it right now. And I agree with the other panelists about the moratorium: you could argue about the merits, whether it was the right thing or the wrong thing, but it was absolutely the right thing to raise this and get it on everybody's agenda. This is not something we want six months from now; it's something we need now.

Presenter: So, Evan Burfield, when you talk about a tsunami in five years' time, what does that look like?

Evan Burfield: Look, I'm down here in Austin, Texas, at the Capital Factory startup accelerator. I've spent the whole day meeting with startups, and there's not a startup out there right now that is not applying these AI generative models, these large language models, to every interesting problem under the sun. There are all of the scary dystopian possibilities that you led into this segment with, but there are also incredible advances in how to make work more fulfilling and more impactful, how to apply tremendous personalization to medicine based on our genetics, our environment, the particular issues we're having, how you make government more responsive and feel more like a concierge to citizens. All of that is also being worked on. And I think the task is figuring out how we put the guardrails in place around some of the scarier things, which isn't just about regulating AI; it's about changing our social policy, changing our market policies themselves, so that we can mitigate some of that and direct this into the much more hopeful and optimistic direction.

Presenter: Why, on that point though, Evan, is it imaginable, in the current scenario, and you are around it all the time, that a research lab would cross a critical line here without even noticing?

Evan Burfield: I'm personally skeptical. Gary's written some wonderful points about the fact that we are very, very far, I believe, from artificial general intelligence and the Terminator scenarios. What I think we've got to be very aware of right now is simply that this technology is already, right now, today, at a state, even if it did not advance any further, where its application is going to profoundly change how we live our lives, how we work, how we engage with each other in communities, how our democracies function. The impacts on our democracies are going to be felt right in the 2024 political cycle here in the U.S. That's what I think we need to be talking about and preparing for. The scenarios of 'AI is like nuclear weapons, we have to ban it immediately' I think are much less applicable than the much more realistic changes that are already happening around us right now and that are going to accelerate.

play07:09

now that are going to accelerate miles

play07:12

um you've just come back from Washington

play07:13

and I know that you've been talking to

play07:15

policymakers about the specific issues

play07:17

in fact the reason we're talking about

play07:18

it tonight is because you tweeted no one

play07:21

has a clue I I mean is that is it as

play07:23

blunt as that that nobody really

play07:24

understands it there is practically no

play07:26

work being done on it

Miles: Hi Christian, you're spot on. The three biggest challenges right now with policymakers are: one, this was completely foreseeable. There were some of us in Washington talking about this 10 or 15 years ago; policymakers weren't paying attention, and most of the think tanks in Washington really failed to start a conversation about the practical things that needed to be done to prepare for the age of AI. So we're behind the ball from a policymaking standpoint. The second thing: I would emphasize what Evan Burfield just said, there is a wave coming, and you can do two things when a wave is coming: you can get crushed by it or you can ride the wave. And to use another analogy, right now the discussion in Washington is about whether to put the genie back in the bottle or not. That shouldn't be the discussion; it should be what three wishes we should ask the genie, and that's the discussion that should be had about how to handle AI and use it for good purposes. And finally, the other problem is policymakers are not thinking two steps forward on the chessboard. It's AI right now, but in this decade AI is going to be supercharged by other technologies, like quantum computing, that are going to give machines genuine human-like emotion. What are we doing to prepare for that? We should be having that conversation now; there need to be institutions in Washington that focus on that.

Presenter: So, Jack, we've had a discussion about that in this country, and the UK government has decided that it doesn't need a dedicated UK regulator for AI. So who's overseeing it?

Jack: That's a very good question. I was on stage last night with the chancellor, Jeremy Hunt, and I asked him about this; you know, he's the guy in charge of the UK economy. And he was really quite dismissive, in the sense that he said, you know, this is something that is going to happen, and we have always embraced new technologies in this country and we should do so again. 'Full steam ahead' was the phrase that he used. He was very, very positive. He did not want to talk about the possibility that people would lose their jobs because of this technology; he only saw it as a purely positive thing, and he was not keen to talk about the way it should be regulated. Now, I'm no expert on this stuff, I'm a politics guy, but what I do know is how Westminster works and how political systems work, and I can tell you now, and you'll know this, Christian: there is no way our political system is set up to deal with this challenge, absolutely no chance. The speed at which decisions are made in Westminster, and I suspect in other major political centers, is far too slow to cope with the pace at which this technology is coming. The policymakers do not understand it at all; this is just something that is going to wash over us, and we're going to have to cross our fingers. You know, the UK government put out a white paper, which is what they call their draft strategy on AI, the other day. I mean, just the very name of it, white paper, tells you how old-school this is. It's out of date already, and that thing took years for them to put together. We just don't have the sort of nimble, small-system, smart-thinking people set up to deal with this, and I'd be very surprised if that's different in the US or indeed in many of the other big power centers.

Presenter: Well, they clearly don't understand it. Professor Marcus, do they call you in to try and get you to explain it to them?

Gary Marcus: I was talking to people in the U.S. and Canadian governments yesterday; I've been called a lot lately. I think there is an awareness that people don't quite know what to do, and they are increasingly turning to me, and also turning to all of my academic colleagues and so forth. So I think there's at least a recognition, and people know what they don't know. I do think that the UK white paper saying that you won't have a central office of AI is certainly a mistake, for all the reasons that were kind of implicitly just said, which is that the government is going to be ill-equipped to deal with the speed of this. If you just leave it to 20 different regulatory agencies, each of which doesn't have expertise, you're asking for trouble; you're asking for a lack of coordination, and it's just not realistic that all of those agencies are going to be up on things. So there needs to be, I think, at least some central oversight. I think the United States should consider a cabinet-level AI officer, and you should consider something comparable. You need some people, maybe like a G7...

Presenter: So we have a G7 meeting of foreign ministers; we need a G7 meeting of AI ministers. Is that effectively what you're saying?

Gary Marcus: Well, I'm calling for something similar, which is a global organization, kind of like the IMF or an International Atomic Energy Agency, where you have a lot of experts, a lot of people in government, a lot of people in the companies, and you have regular meetings. You're like, well, this week the new thing, and this is a real example, is this thing called Auto-GPT, where you have AIs training other AIs. What do we do about that? How big a threat is it: is it a small threat, a big threat? If you have a research arm, then you can say, let's do some experiments here and try to figure out what the limits are. Right now, instead, you have 193 countries; maybe some of them have read the news about this major discovery, some of them aren't even aware of it, and there's no coordination here. That just can't be the right way.

Presenter: You're nodding, Evan, because this is the key issue. As Miles discussed, it's not human-competitive intelligence, it's what happens after AI gets smarter than human intelligence, right?

Evan Burfield: Amazing. But you know, I can't go to a conference, and I actually live in Washington, DC most of the time, I can't go to a dinner or a conference or a meeting without the word AI being discussed, and they're all talking about ChatGPT. And Gary's right about Auto-GPT: there was an experiment run last week called ChaosGPT, where they took a neutered version of Auto-GPT and told it to go out and figure out the most efficient way to destroy humanity. It was sort of a test, and it set to work doing it. A lot of this stuff is moving incredibly fast, and figuring out how you can educate policymakers about how to mitigate, regulate, bring transparency to some of those threats, while not preventing what can be breathtaking advances in how we live our lives, in much more fulfilling and purposeful ways, and in society, I think that's a lot of the trick here. To echo Jack's point, though, about white papers and the way government moves, I tend to agree. Miles may be more optimistic than I am, but I tend to agree: I think a lot of the big changes that are going to need to happen probably won't happen until there's some sort of provoking event, some sort of crisis. I don't think that prevents us, though, from starting to have the conversations. At least the way Washington tends to work, you want to have the policy container, the framework, the ideas ready, some sort of consensus being built, so that when the opportunity presents itself, kind of like a VC who sees a great startup, when the opportunity presents itself you're ready to jump on it, you're ready to move forward. And I think that has to be happening right now.

Presenter: Go ahead.

Gary Marcus: The opportunity that I see right now is to build some global governance. The governments are afraid of the technology companies; the technology companies are afraid the governments are going to shut them down, as they did in Italy. And this means everybody has some incentive to go to the table. That's rare, and I think we should be seizing that opportunity right now to try to do something coherent that is dynamic enough to cope with the speed of the change, to take advantage of the good things and to avoid the bad things. But we need that coordination now, and we can't just leave this to the usual mechanisms; it's just too slow.

Presenter: Miles, one of the more worrying things that you said was that McCarthy was looking at an AI for the Republicans. And you know, one of the experiences we have of recent years is that the Russians were able to interfere in a democracy, and, who knows, it's been debated whether they were able to change some of the results through what they were putting onto the internet. I mean, we're into a whole new ball game for democracy if AI can put out misinformation and propaganda.

Miles: It's not 'if'; it's how much and when. It's going to happen, probably in the 2024 election.

Presenter: Wow.

Miles: Yeah, there's no question. I mean, this coming election cycle in the United States, it's a big concern for election security authorities, and it should be. But I've got to go back to what the other panelists said: in order to respond to it effectively, we've got to start with education, and right now, I mean, I've tried to brief policymakers on this, and it's like explaining particle physics to a chocolate chip cookie. There's just not recognition of what's happening. If I was President Joe Biden right now, I would put the entire cabinet on Air Force One, I would fly them to Silicon Valley, and we would spend the week educating them about what's happening. Because there aren't just these security implications for the elections, as Evan notes; there are also really positive implications. There's the ability to address major health care problems, hunger, homelessness, and to do it in real time, and we are missing some opportunities by policymakers not being educated on the subject. But of course security has to come first, and in order to protect elections or anything else, it's got to start with policymakers becoming technologists, being educated.

Presenter: Fascinating conversation. We're going to have to leave it there. Evan Burfield, Gary Marcus, thank you very much indeed for joining us.


Related tags
Artificial Intelligence, Job Automation, Misinformation, Global Governance, Elon Musk, AI Regulation, Cyber Warfare, Tech Investor, AI Ethics, Propaganda