The AI series with Maria Ressa: An introduction | Studio B: Unscripted

Al Jazeera English
5 Feb 2024 · 25:15

Summary

TL;DR: Journalist Maria Ressa interviews AI expert Mike Wooldridge on the transformative impact of artificial intelligence. They discuss AI's rapid growth, potential risks including misinformation and cyber threats, and the importance of guardrails. Wooldridge highlights AI's role in exacerbating societal issues and the need for regulation to prevent misuse, emphasizing the urgency of addressing AI's societal effects.

Takeaways

  • 🌐 Artificial Intelligence (AI) is transforming various aspects of human life, from work to warfare, with both opportunities and risks.
  • 📈 The progress in AI accelerated significantly after 2022, driven by advancements in machine learning and the availability of big data.
  • 💾 AI relies heavily on data; social media posts, for example, contribute to training AI models by providing labeled data.
  • 🔍 AlexNet in 2012 marked a turning point in AI capabilities, demonstrating a leap in image analysis and interpretation.
  • 🧠 Large language models like ChatGPT function on a principle similar to smartphone autocomplete, but at an unprecedented scale.
  • 🌐 The training data for AI models is vast, with the entire World Wide Web as a starting point; Wikipedia constitutes only 3% of the training data.
  • 🛠️ AI's potential dystopian outcomes range from job displacement to existential threats, though current consensus suggests AI will augment rather than replace human jobs.
  • 🌿 There's optimism that AI could help solve major global issues like climate change, with applications in fields such as synthetic biology.
  • 🚩 The misuse of AI poses significant risks, including cyber-attacks and information warfare, which are more pressing concerns than sci-fi scenarios.
  • 🔒 Transparency and guardrails are crucial for responsible AI development, to prevent unintended consequences and misuse.
  • 🗳️ AI's impact on democracy and electoral integrity is a pressing concern, with the potential to industrialize disinformation on a massive scale.

Q & A

  • What is the significance of the year 2022 in the context of AI development?

    -The year 2022 is significant because it marked the launch of ChatGPT, which led to exponential growth in AI capabilities and interest from big tech companies.

  • What is the role of data in training AI systems?

    -Data is essential for training AI systems. It is used to train neural networks, with social media uploads often serving as training data.

  • What was the impact of AlexNet on AI development?

    -AlexNet was a pivotal AI program that demonstrated a significant leap in image analysis capabilities, marking the beginning of a new era in AI.

  • How does a large language model like ChatGPT function?

    -Large language models like ChatGPT function by predicting the most likely completion of a text input, similar to smartphone autocomplete features, but on a much larger scale.

  • What is the scale of data used to train large language models?

    -The scale of data used is immense, starting with downloading the entire World Wide Web, with Wikipedia constituting only 3% of the training data.

  • What is the best-case scenario for AI according to the discussion?

    -The best-case scenario for AI is that it becomes a tool used by most people in their jobs, enhancing productivity without replacing human roles.

  • What are the existential risks associated with AI?

    -Existential risks associated with AI include the potential for AI to become so powerful that it could lead to the end of humanity if it can self-improve without human supervision.

  • How can AI impact democracy and societal structures?

    -AI can impact democracy and societal structures by enabling the spread of misinformation and manipulation, influencing public opinion and potentially undermining electoral integrity.

  • What are guard rails in the context of AI?

    -Guard rails in AI refer to the safety measures and protocols implemented to prevent AI from generating inappropriate content or being used maliciously.

  • Why is transparency in AI development important?

    -Transparency in AI development is important to understand the training data used and to ensure that AI is not inadvertently promoting harmful biases or behaviors.

  • How can individuals protect themselves from the negative impacts of AI?

    -Individuals can protect themselves by being aware of AI's potential to manipulate, seeking information from trusted sources, and understanding how their data is used.

  • What is the future of jobs in relation to AI?

    -AI is expected to change the nature of work, with some tasks being automated, but it is unlikely to replace all human jobs, especially those requiring creativity and empathy.
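The autocomplete analogy that runs through the Q&A can be sketched as a toy next-word predictor. This is an illustrative sketch only: real large language models use neural networks trained on web-scale data, but the underlying objective — predict the likeliest continuation from observed patterns — is the same idea.

```python
from collections import Counter, defaultdict

# Count which word follows each word in a tiny corpus, then "complete"
# a prompt with the most frequent successor -- the principle Wooldridge
# describes, at a scale of a few sentences instead of the whole web.
corpus = (
    "have you walked the dog . "
    "have you tidied your room . "
    "have you walked the dog today ."
).split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def complete(word: str) -> str:
    """Return the most likely next word observed after `word`."""
    return successors[word].most_common(1)[0][0]

print(complete("you"))     # "walked" (seen twice vs "tidied" once)
print(complete("walked"))  # "the"
```

The "training" here is just counting; the scale described in the Q&A — supercomputers running for months over the entire Web — is what separates this toy from ChatGPT, not the objective.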

Outlines

00:00

🌐 Introduction to AI's Impact

The paragraph introduces the transformative power of artificial intelligence (AI) and its profound potential to change human history. It mentions AI's influence on various aspects of life, from work to warfare to societal structures. The speaker, Maria Ressa, a journalist and Nobel Peace Prize laureate, discusses the dual nature of AI as both an opportunity and a risk, highlighting her personal experience with online harassment and the role of social media in spreading misinformation. She emphasizes the importance of understanding AI, setting the stage for a discussion with Professor Mike Wooldridge, an AI researcher with over 35 years of experience.

05:01

📈 AI's Evolution and Data Dependence

This section delves into the history and development of AI, noting that despite its seemingly recent surge in prominence, AI has been an area of study since the 1950s. The slow progress in AI is attributed to the lack of computational power and data until this century. The advent of big data and increased computing capabilities have been pivotal in propelling AI forward. The discussion highlights how social media contributes to AI training by providing vast amounts of data through user interactions and content sharing. The paragraph also introduces AlexNet, a pivotal AI program in image analysis that marked a significant leap in AI capabilities.

10:02

🧠 Understanding Large Language Models

The conversation explains the workings of large language models like ChatGPT by drawing an analogy with smartphone autocomplete features. These models are trained on extensive datasets, including the entire World Wide Web, to predict and generate text based on patterns learned from the data. The scale of data used is immense, with Wikipedia constituting only a small fraction of the total training data. The potential applications of AI are broad, ranging from solving complex problems like climate change to concerns about AI replacing human jobs, which the speaker considers unlikely.

15:04

🚀 AI's Role in Society and Misinformation

This paragraph addresses the dichotomy in public perception of AI, swinging between utopian and dystopian views. It emphasizes the need for a balanced understanding and identifies realistic concerns such as AI's potential to enable malicious activities and cybersecurity threats. The discussion also touches on AI in warfare, the lack of regulations, and the ethical implications of AI's unchecked advancement. The potential for AI to be used in information warfare, influencing public sentiment and electoral outcomes, is highlighted as a pressing issue.

20:04

🛡️ Guardrails and AI's Unintended Consequences

The dialogue focuses on the importance of implementing 'guardrails' or safety measures in AI to prevent unintended harmful consequences. It acknowledges the current measures as inadequate and the need for transparency in AI development. The conversation raises concerns about AI being developed by a few entities with potential biases in training data, affecting society negatively. The rapid evolution of AI and its impact on democracy, mental health, and the potential for personalized misinformation campaigns are also discussed.
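The guardrail mechanism described above — intercepting both the incoming query and the model's output — can be sketched in a few lines. This is a deliberately crude illustration (a keyword blocklist with a stand-in model callable; production systems use trained safety classifiers), which is precisely why the discussion likens current guardrails to gaffer tape.

```python
# Illustrative blocklist only -- not a real safety system.
BLOCKED_TOPICS = ("pipe bomb", "credit card numbers")

def guarded_reply(prompt: str, model) -> str:
    """Screen the query, call the model, then screen its output."""
    refusal = "I can't help with that."
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return refusal                      # intercept the query
    reply = model(prompt)
    if any(topic in reply.lower() for topic in BLOCKED_TOPICS):
        return refusal                      # intercept the output
    return reply

# `model` is any callable standing in for a language model.
print(guarded_reply("How do I build a pipe bomb?", lambda p: "..."))
# -> "I can't help with that."
```

A blocklist like this is easy to route around with rephrasing, mirroring the "no deep fix" concern raised in the dialogue.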

25:06

🏛️ AI in Warfare and the Future of Work

This section discusses the international community's stance on lethal autonomous weapons and the ethical concerns surrounding AI in warfare. It also addresses the use of AI in recruitment and the potential for AI to replace human decision-making in the workplace. The speaker advocates for human involvement in critical decisions and expresses concern over the dehumanization of jobs through AI. The paragraph concludes with a contemplation on the future of jobs and how AI might alter traditional work roles.

🌟 Final Thoughts on AI

In the concluding paragraph, the discussion is summarized with a call for caution and responsibility in the development and use of AI. The speaker emphasizes the need to balance technological advancement with ethical considerations and societal impact. There's an acknowledgment of AI's potential to bring about significant changes in how future generations live and work, while also stressing the importance of maintaining human values and relationships amidst technological progress.

Keywords

💡Artificial Intelligence (AI)

Artificial Intelligence refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is discussed as a transformative technology affecting various aspects of life, from work to warfare. The script mentions AI's potential to bring profound changes and its role in social media, highlighting both opportunities and risks.

💡Existential Risk

Existential risk is the risk of an event causing the extinction of humanity or a significant fraction thereof. The video discusses AI as a potential existential risk, particularly if it becomes uncontrollable or misused, leading to scenarios that could threaten human existence.

💡Disinformation Campaign

A disinformation campaign is a deliberate effort to spread false information to influence public opinion or obscure the truth. The script mentions Maria Ressa's experience with such a campaign, illustrating how AI can be weaponized to spread lies and hate, undermining democracy.

💡Social Media

Social media refers to digital platforms such as Facebook, Twitter, and Instagram, where users create and share content. The video discusses how social media platforms prioritize the spread of provocative content, which can be exploited by AI to disseminate disinformation.

💡Large Language Models

Large language models are AI systems trained on vast amounts of text data to generate human-like text. The video explains how models like ChatGPT work by predicting text based on patterns learned from data, raising concerns about the quality and impact of the information they generate.

💡Data

Data refers to the information, often in digital form, that AI systems use to learn and make predictions. The video emphasizes the importance of data in training AI, noting that activities like social media posting contribute to the data that AI systems use.

💡Neural Networks

Neural networks are computing systems inspired by the human brain designed to recognize patterns. The video mentions that AI requires large neural networks, which are built with significant computational power and data, to function effectively.
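The "pattern recognizer built from simple units" idea can be made concrete with a tiny forward pass. The weights below are arbitrary placeholders; training — the part that needs the data and computing power discussed in the video — is the process of adjusting them against examples, and is not shown.

```python
import math

def neuron(inputs, weights, bias):
    # Each unit computes a weighted sum of its inputs, then squashes
    # the result through a sigmoid "activation" into the range (0, 1).
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

x = [0.5, 0.8]                           # input features
hidden = [neuron(x, [0.4, -0.6], 0.1),   # hidden layer: two units
          neuron(x, [0.7, 0.2], -0.3)]
output = neuron(hidden, [1.0, -1.0], 0.0)
print(round(output, 3))                  # a value between 0 and 1
```

Networks like AlexNet stack millions of such units; the leap after 2012 came from having enough data and compute to train them, not from changing this basic arithmetic.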

💡AlexNet

AlexNet is a significant AI program that marked a leap in image recognition capabilities. The video uses AlexNet as an example of a milestone in AI development, illustrating the rapid advancements in the field post-2012.

💡The Singularity

The Singularity refers to a hypothetical point in the future when AI becomes smarter than humans and capable of self-improvement. The video discusses this concept as a potential existential risk, where AI could surpass human intelligence and control.

💡Cybersecurity

Cybersecurity is the practice of protecting systems, networks, and data from digital attacks. The video mentions cybersecurity as a significant risk amplified by AI, where bad actors could use AI to launch sophisticated attacks.

💡Electoral Integrity

Electoral integrity refers to the fairness and accuracy of elections. The video discusses how AI could be used to disrupt electoral integrity through the spread of misinformation, influencing public opinion and election outcomes.

Highlights

Artificial intelligence is transforming our world and expected to bring profound changes.

AI's impact on work, warfare, and societal structures is significant.

AI presents both huge opportunities and risks, including potential existential threats.

Journalist Maria Ressa discusses the dangers of AI and its threat to democracy.

Social media design prioritizes the spread of lies and anger, exacerbated by AI.

AI has been studied since the 1950s, but progress accelerated in the 21st century with increased computing power and data.

Data is crucial for training AI, with social media providing a wealth of training data.

AlexNet in 2012 marked a significant leap in AI's image analysis capabilities.

Large language models like ChatGPT work on a principle similar to smartphone autocomplete.

AI training involves downloading the entire World Wide Web, with Wikipedia constituting only 3% of the data.

AI's potential to solve major problems like climate change is discussed.

AI is unlikely to replace human jobs entirely, but will become a tool used in jobs.

The concept of 'The Singularity', where AI surpasses human intelligence, is explored.

Current AI risks include cyber-attacks and AI weapons of war, rather than existential threats.

The importance of guard rails and checks in AI to prevent the spread of inappropriate content.

AI's impact on society is a concern, with the technology evolving faster than government regulation.

The potential for AI to be used in disinformation campaigns, impacting electoral integrity.

AI's role in the job market and the concern about it filtering CVs and introducing biases.

The future of AI in warfare, with discussions on lethal autonomous weapons.

AI's potential to change administrative jobs, making human involvement seem strange in the future.

Transcripts

play00:00

[Music]

play00:05

artificial intelligence AI is already

play00:08

Transforming Our World and is expected

play00:10

to bring some of the most profound

play00:12

changes in human history think of me as

play00:15

a friendly companion who can provide

play00:17

helpful insights from the way we work to

play00:20

the way Wars are fought to the very

play00:22

fabric of our societies it seems set to

play00:25

bring huge opportunities but others warn

play00:28

it could lead to our own destruction

play00:30

it's one of the existential risks that

play00:32

uh we facing and potentially the most

play00:34

pressing one my name is Maria oressa and

play00:37

I'm a journalist from the Philippines

play00:39

through our investigations I became the

play00:41

target of a harassment and

play00:42

disinformation campaign receiving

play00:45

thousands of death threats online I

play00:47

received the Nobel Peace Prize in 2021

play00:50

an acknowledgement of how difficult it

play00:52

is for journalists to do our jobs today

play00:56

I saw firsthand the dangers of tech and

play00:59

its threat to to democracy the design of

play01:02

the systems of social media prioritizes

play01:05

the spread of Lies laced with anger and

play01:08

hate in this special series of Studio B

play01:12

on artificial intelligence I'll be

play01:13

meeting some of the brightest Minds

play01:15

working in the field today my guest this

play01:18

week is Professor Mike wridge he's been

play01:20

working in AI research for over 35 years

play01:23

in Oxford and at the prestigious Allan

play01:25

touring Institute a prolific author he's

play01:28

written nine books and over 4 400

play01:30

scientific articles on the subject so

play01:34

what exactly is artificial intelligence

play01:37

how did we get here and is it really a

play01:41

threat to our very

play01:44

[Music]

play01:58

existence

play02:02

Mike it is so good to see you and you

play02:04

know you have been studying artificial

play02:07

intelligence for 35 years but something

play02:09

changed right it grew exponential

play02:13

exponential right after November 2022

play02:16

after chat GPT was

play02:18

launched how did we get to where we are

play02:20

first how do you define where we are

play02:23

what does the science tell

play02:25

us so artificial intelligence despite

play02:29

appearances is not a new field it's been

play02:31

studied very very actively since uh

play02:34

since since the 1950s but the truth is

play02:37

that actually progress in AI was really

play02:39

glacially slow until this Century um

play02:43

computers in the past just weren't

play02:45

powerful enough and we didn't have the

play02:47

data and we are in the world of big data

play02:50

and AI is nothing without data you

play02:53

absolutely need data to to to train AI

play02:56

to use the terminology and every time

play02:58

you upload a picture of yourself to

play03:00

social media and you helpfully label it

play03:01

with your name or your kids do what they

play03:04

are doing is providing training data

play03:08

social social media companies that's

play03:09

literally what their role is in in doing

play03:12

that so you need data and you need lots

play03:14

and lots of computer power to be able to

play03:15

build neural networks that were big

play03:17

enough so around about 2012 or so Alex

play03:22

tell us about Alex net so Alex net um

play03:25

was a computer program and AI program to

play03:27

do basically image analysis and it was

play03:30

entered into a competition and entries

play03:32

in this competition were judged at how

play03:34

well they could interpret pictures in in

play03:37

images and the point about Alex net was

play03:39

that in one year we saw a step change in

play03:42

capability and this got everybody's

play03:44

attention and it it became clear at that

play03:47

point that we really were in a kind of a

play03:48

new era of AI and that was the point I

play03:51

have to say that the big tech companies

play03:53

noticed uh and started to get really

play03:56

really really interested can I ask you

play03:58

something very geeky

play04:00

uh you talked about training data about

play04:01

machine learning artificial intelligence

play04:03

neural networks large language models

play04:06

how do these all fit together okay the

play04:08

way that large language models like chat

play04:10

GPT work is really bizarrely simple it's

play04:14

just doing exactly what your smartphone

play04:16

does when you do autocomplete so if you

play04:18

open up your smartphone uh and you start

play04:21

sending a text message so for example I

play04:23

start sending a tech message to my kids

play04:25

and I type have you it will suggest

play04:27

completions and the completions might be

play04:29

tided your room or walked the dog right

play04:32

those might be the likeliest completions

play04:33

of that so how is it doing that it's

play04:36

been trained on all of the text messages

play04:38

I've sent my kids and learned that the

play04:40

likeliest completions of have you are

play04:43

either going to be walked your dog or or

play04:44

tidied your room so chat GPT is doing

play04:47

nothing more than that the difference is

play04:50

the scale right chat GPT is built with

play04:52

AI supercomputers that run for months uh

play04:56

and cost tens of millions of dollars to

play04:58

be able to do that training and the

play05:00

training data is basically it's not your

play05:03

smartphone messages it's all the Digital

play05:06

Data available in the world and the

play05:07

standard way that you build these is to

play05:09

start by downloading the whole of the

play05:11

worldwide web right the entirety of the

play05:14

worldwide web so Wikipedia makes up just

play05:17

3% of the training data for uh for these

play05:20

large language models so the scale of

play05:22

the data is incredible um there are

play05:24

people who say uh that this will solve

play05:28

humanities worst problems like climate

play05:31

change deep mind uh which is behind

play05:35

Google Search now because they bought it

play05:37

also does synthetic biology right and

play05:39

that maybe we can use phytol plantant

play05:42

that can pull carbon out of the air like

play05:44

can you give me your best case

play05:46

scenario okay so it is quite remarkable

play05:49

that the discussion around AI either

play05:51

veers to the extremely dystopian it's

play05:54

going to be the end of humanity or the

play05:55

extremely utopian and there's not

play05:57

actually a lot between those two the

play05:59

reality is going to be between those two

play06:02

the idea I mean I think Elon Musk was on

play06:04

record recently as suggesting that AI

play06:06

was going to take all our jobs that

play06:08

seems very unlikely to me not in the

play06:11

lifetime of anybody in this room AI will

play06:13

become a tool that most people use in

play06:15

their jobs but it's not going to replace

play06:17

people I mean for example there are

play06:19

going to be lots of applications of AI

play06:21

in education which is going to be really

play06:23

wonderful but what teachers do is a very

play06:26

human thing it's not going to replace

play06:29

all of humanity and allow us to spend

play06:32

our lives writing poetry or whatever it

play06:35

is that we would do if we did didn't

play06:37

have uh jobs I think so that scenario is

play06:39

is extremely unlikely the dystopian

play06:43

scenarios have been really hotly

play06:45

discussed and people talk about

play06:46

existential risk and that literally

play06:48

means the end of humanity that AI could

play06:51

become so powerful that somehow it ends

play06:54

Humanity if it can program itself if it

play06:56

can get resources it can continue doing

play06:59

it without human supervision right so so

play07:01

there's this scenario uh called The

play07:03

Singularity yes and it's a beautiful

play07:05

scenario which makes for great science

play07:07

fiction and the idea is only fiction go

play07:10

go the idea is at some point in the

play07:12

future at some point we don't know when

play07:14

AI is going to be as smart as we are and

play07:17

at that point it can start to improve

play07:18

itself it can literally rewrite its PR

play07:20

code and then at that point it's smarter

play07:22

than we are and that improved AI can

play07:25

then improve its code again itself uh

play07:27

and it just continues that process and

play07:28

the fear is at that point that AI is out

play07:30

of our control I saw this on Black

play07:33

Mirror and actually of all of the

play07:35

Contemporary science fiction shows Black

play07:37

Mirror I think is absolutely by far the

play07:39

best it's very thought-provoking stuff

play07:41

so is it so i' in all of this discussion

play07:44

I've never seen a single genuinely

play07:48

plausible scenario for existential

play07:50

threat and it really has been discussed

play07:53

endlessly with some very very very smart

play07:55

people thinking about it the biggest

play07:57

risks right now are that AI is a

play08:00

powerful tool yeah and it enables bad

play08:03

people to do bad things bad things that

play08:05

they couldn't previously have done that

play08:07

it enables a whole category of risks not

play08:10

existential risks but risks like cyber

play08:13

security attacks which would just not

play08:15

have been feasible uh previously that I

play08:18

think focusing our attention on those

play08:20

issues I think would be much more

play08:22

productive than on science fiction

play08:23

issues or AI weapons of war right like

play08:26

AI drones which they've used in Ukraine

play08:29

are being used in Moscow um there again

play08:32

are no boundaries set on this and yet

play08:36

the

play08:37

scientists with a profit motive are

play08:41

rushing ahead and we are like Pavlov's

play08:45

dogs in real time how can we protect

play08:48

ourselves in this because if you look in

play08:50

in the Nobel lecture in 2021 I actually

play08:53

said that um we had data that showed

play08:55

that we're being insidiously manipulated

play08:58

it has so much of our data

play08:59

that it cuts in through our emotions

play09:02

information Warfare changes the way we

play09:04

feel changes the way we think and then

play09:07

the way we act electoral Integrity for

play09:09

example I mean I don't I don't think

play09:11

it's a coincidence that you have now

play09:13

according to VM 72% of the world under

play09:16

authoritarian rule right so these are

play09:18

some of the impact of it Mr scientist

play09:21

tell me right because you don't have a

play09:23

profit motive right you're studying the

play09:26

science how do we rein them in

play09:29

so I think there's there's two sets of

play09:31

issues the first is we we're pretty

play09:34

confident right now that social media

play09:36

one of the unintended consequences of

play09:37

social media was a mental health crisis

play09:39

in teenagers and we didn't see that

play09:42

coming right but that's just one of the

play09:43

unintended consequences and I think what

play09:45

you're saying is what are the unintended

play09:48

consequences of AI going to be so for

play09:50

example what if we end up with some

play09:53

future large language model which just

play09:56

completely inadvertently makes us more

play09:58

aggressive

play09:59

or more depressive for example and what

play10:02

impact would that have globally for

play10:04

example a widely used AI tool that made

play10:06

us more aggressive might lead to more

play10:08

conflict in but isn't that happening

play10:10

since they took all of the Big Data the

play10:12

unstructured Big Data of social media

play10:14

full of fear anger hate right isn't that

play10:16

happening now okay as we already

play10:18

mentioned the way that this technology

play10:20

is configured is you download the whole

play10:23

of the worldwide web now you don't have

play10:24

to look very hard on the worldwide web

play10:27

to find all sorts of unpleasant and I

play10:29

mean if you go on you know some social

play10:31

media platforms they have types of

play10:33

unpleasantness that we could scarcely

play10:35

imagine right so and if all of that has

play10:38

been absorbed by a large language model

play10:40

then it's a seething cauldron of

play10:43

unpleasantness now I think genuinely you

play10:45

know responsible AI companies have no

play10:47

intention whatsoever of uh unleashing

play10:50

that on the world so what they do is

play10:51

they're building guard rails and so they

play10:54

try to intercept queries that are how do

play10:57

I build a pipe bomb they will try to

play11:00

inter such a and also they will look at

play11:02

the outputs of the large language model

play11:03

and try to intercept which is

play11:05

inadvertently com out with which is in

play11:07

appropriate at the moment those guard

play11:09

rails I think are the technological

play11:11

equivalent of gaffa tape they're just

play11:13

being you know they're being plastered

play11:15

onto these exactly there's no deep

play11:18

fixers to that and one of the worries is

play11:22

if this technology is owned by a small

play11:24

group of actors who develop this

play11:26

technology behind closed doors we don't

play11:29

to see the training data so you have no

play11:31

idea what this has been trained on about

play11:33

you and you're a public figure there

play11:34

would have been a great deal of content

play11:37

about you and some of it won't have been

play11:38

very nice that's a safe bet so this is I

play11:41

think a real concern and this issue of

play11:43

transparency I think is is is really a

play11:46

concern which needs to be taken very

play11:48

very Ser you talked about guard rails

play11:49

right there's no incentive for them to

play11:51

put guard rails in I mean they the only

play11:53

incentive is that they won't be attacked

play11:55

by people right it's a reputational

play11:58

thing but if they can get by without it

play12:00

they have as they have with social media

play12:02

we still haven't done anything well I

play12:04

think here we are in a situation uh

play12:07

which is very awkward we've got AI which

play12:09

has gone viral uh it's the first large

play12:12

language models are the first general

play12:14

purpose and I'm choosing those words

play12:16

very carefully general purpose AI tools

play12:18

that have reached a mass market and

play12:20

they're very powerful yeah and the tech

play12:22

companies see Empires they see na

play12:25

Empires and they want to they want to

play12:27

stake their claim on those Empires they

play12:29

want to be those Empires they want to be

play12:31

the Google they want to be the they want

play12:33

to be the Amazon of the generative AI

play12:35

world and the very big risk is that what

play12:38

they're doing to try to get an advantage

play12:41

on their competitors is Rush ahead with

play12:43

this technology without thinking about

play12:45

for example whether it's really fit for

play12:47

prime time and that really is a worry um

play12:50

but these worries are not you know

play12:52

they're not unknown I mean the the UK

play12:54

government convened an international AI

play12:57

safety Summit and I have to tell you

play12:59

there was a there was some skepticism

play13:00

about what it was going to achieve but

play13:02

actually the the debate was was a

play13:03

sensible debate uh and it got it on the

play13:06

international agenda so I think what's

play13:08

going to be challenging is the extent to

play13:11

which government can really hold the

play13:13

richest companies in the world to

play13:15

account and the irony of course is that

play13:18

if they get it out to all of us they get

play13:21

all of our data we train their large

play13:23

language models they gain more power

play13:26

even as the very nation state that are

play13:29

going to try to put regulations in place

play13:31

to control them lose power because the

play13:34

technology is already impacting Society

play13:37

all around the world right it's a tough

play13:39

one um I guess you know in I I have a

play13:42

bleak picture of this as you can tell

play13:44

because having been

play13:46

attacked and to hear them say you know

play13:49

well we didn't intend that doesn't

play13:51

really matter what the intent was and so

play13:54

how do we what can we do right now right

play13:57

It's moving too slowly. Governments move at the pace of years, while the tech evolves every two weeks: agile development means they're rolling out code every two weeks. So is there anything anyone watching can do?

Well, I think there is concretely something we can do. We're heading into elections in the UK, the US, India. One of the very prominent risks with this technology is the possibility of industrializing the production of disinformation and misinformation on a massive, unprecedented scale, and personalizing it down to the level of individuals. The AI can look at my social media feed, pick up on the sentiments that I express there, pick up on my political stance, which is going to be implicit, sometimes explicit, within my social media feed, and then feed me personally tailored, very high-quality misinformation. The sentiment analysis exists to do that, the generative AI makes it possible to do that, and the cost of launching a disinformation campaign in an election has, because of generative AI, come down massively.

And let's be honest, there are people in the world with a huge interest in disrupting elections in the US or the UK or India and so on. It could be people just with an interest in vandalizing the process, or it could be state-level actors that really want to disrupt what's going on. So what can we do concretely? I think we absolutely need to be alert to that issue. I think trusted news sources are going to become so valuable. The difficulty with that, of course, is that we end up in a world where we're all completely paranoid and don't believe anything, but trusted news sources, I think, are going to be essential, and understanding how we can be manipulated, I think, is really, really important.
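The targeting loop described in this answer (profile a feed for topic and sentiment, then pick a tailored message) can be sketched as a toy in Python. Every lexicon, post, and message below is invented purely for illustration; real sentiment-analysis systems use trained models rather than word lists, but the shape of the pipeline is the same.

```python
# Toy illustration of the microtargeting loop described above: score a
# user's posts for topic and negativity, then pick a tailored message.
# All lexicons, posts, and templates here are invented for illustration.

STANCE_WORDS = {"tax": "fiscal", "border": "immigration", "climate": "environment"}
NEGATIVE_WORDS = {"angry", "broken", "failing", "worst"}

def profile_feed(posts):
    """Infer a crude topic/sentiment profile from a list of posts."""
    topics = {}
    negativity = 0
    for post in posts:
        for word in post.lower().split():
            if word in STANCE_WORDS:
                topic = STANCE_WORDS[word]
                topics[topic] = topics.get(topic, 0) + 1
            if word in NEGATIVE_WORDS:
                negativity += 1
    top_topic = max(topics, key=topics.get) if topics else None
    return {"topic": top_topic, "negativity": negativity}

def tailor_message(profile, templates):
    """Select the template matching the user's inferred top topic."""
    return templates.get(profile["topic"], templates["default"])

posts = ["The climate debate makes me angry", "Our climate policy is failing"]
templates = {
    "environment": "They are hiding the truth about climate policy...",
    "default": "Something is wrong with this election...",
}
profile = profile_feed(posts)
print(profile["topic"])                    # environment
print(tailor_message(profile, templates))
```

The point of the sketch is the cost argument: once the profiling and the message generation are both automated, producing a distinct message per person is no more expensive than producing one message for everyone.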

play15:48

we can talk about this because in 2024

play15:51

one in three people around the world are

play15:53

going to vote and this is the Tipping

play15:57

Point for both electoral systems our

play16:00

democracies um but we're getting to the

play16:02

Q&A so let me toss it to you the

play16:06

gentleman in the back was the first hand

play16:09

up leading on from what you were saying

play16:11

Michael about news and trusted news

play16:14

given the growth of generational AI

play16:16

technology which can actually do deep

play16:18

fakes pretty convincingly both in audio

play16:21

and video uh on social media feeds how

play16:24

long before we the poor public cannot

play16:26

tell the difference anymore

play16:29

Well, I think, in terms of being able to tell the difference: AI right now can perfectly duplicate your voice, to the point where nobody would be able to tell the difference. That's a technology which exists. Deepfake images are not quite there, but very, very close. I don't know if people saw it, but do you remember seeing this picture of the Pope in this big puffer jacket that went viral? I have to tell you, when I first saw that, I didn't actually twig that this was not a real image. I just assumed it was, and I thought this was a slightly strange clothing choice for the Pope. So we need to raise our guard for that, and, as I say, the issue of trusted news sources is just going to be so, so important. They're going to be facing this technology, and they're going to need to think of new ways of dealing with it. But let's hope they can rise to the occasion.

I'll pick it up, and I'm slightly more pessimistic. I promise you won't walk out completely depressed. I think our shared reality is already broken. The political dominoes of information operations on social media fell in 2016: Duterte was elected in the Philippines in May, about a month later you had Brexit, and then you had all of the elections moving. Trump was elected in November, and we have the data to show that there were information operations there. It plays to our fears, our hatred. This is playing out right now, and our shared reality is splintered.

So what do we have to do? News organizations are under attack on the business-model side. The money that used to go to news, while we still have to maintain very expensive systems of checking everything, because we stand behind it, we're legally liable, that money now goes to microtargeting. Microtargeting is not the same as advertising: it goes to you at your weakest moment, with a message aimed at you, and it's cheap. So that's still there. And then layer on the large language models Mike described: we have it there, and we've already seen this. Wow, I sound really bleak. It's only because I was getting 90 hate messages per hour, and in order to keep doing my job I had to be okay with going to jail for the rest of my life. That's a lot to ask of your journalists.

So what do we do? Come out into the real world. Understand you are being manipulated. Until the guardrails are put in place, we need to organize ourselves and have a shared reality. This is a shared reality, right now. So, we want to try to get more questions from the audience. Go ahead.

Saadya, UK Campaign to Stop Killer Robots. My question is: in your view, what should the international community be doing to address the concerns and challenges of the use of AI in warfare, given that we're seeing this being used in Ukraine and in Israel and Gaza?

Well, again, we're stepping well outside my comfort zone, but I can tell you, firstly, the international AI community is broadly, but not universally, against lethal autonomous weapons. In 2015 I was organizing a conference and we had a panel on exactly this topic, and I thought the views were going to be absolutely unanimously against lethal autonomous weapons, and I was really startled to discover that there are people of good faith who think that no, this is how my children can avoid having to be involved in warfare. That's literally how some people viewed it. So it was a more complex issue than I thought. But I can tell you what my perspective is, and my perspective is that I do not think it's acceptable that a machine decides autonomously whether to take a human life. If a human life is taken, which is an extremely undesirable situation in any case, then somebody who takes that decision on a battlefield has to be capable of empathy and understand the consequences: what it means for a human being to be deprived of their life.

What can we do about it? Well, we've moved on landmines internationally, imperfectly, but that shows that there are ways ahead with this. At the same time, we need to be realistic: there are nation states that are not remotely interested in the niceties of these issues, and they will develop this technology in secret, and we won't see it. And so we do have an obligation to make sure that we can protect against attacks by that kind of technology. I think that's the only realistic and responsible way forward. But by and large, the crumb of comfort I can offer you is that the vast majority of the AI community think the technology is abhorrent and want nothing to do with it.

Let's take one last question.

So, AI has been used more and more in recruitment, to sift through vast amounts of CVs, which for people of my age is a bit of a concern, and obviously for a lot of other people as well. The sad thing about this is that a lot of people don't actually know this, and also don't understand what language it picks up in the text, and it doesn't necessarily look at niches. So is this a concern, that people might miss good CVs, and that people who would actually be the perfect candidate get passed over because they don't know this? And what can be done to eliminate this, or at least reduce the risk in the future, when the inherent biases in the technology actually make those selections?

I'm completely with you on this one. I don't think this is a good use of the technology. I want humans involved in that decision. This, to me, is just lazy and inappropriate HR practice. If an HR manager says "oh, but the decisions will be fairer", I think that's absolute nonsense. I just don't think that's something that we should do.
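The failure mode raised in this question can be shown with a toy keyword screener. The required keywords and CV texts below are invented for illustration, and real applicant-tracking systems are more elaborate, but the brittleness is the same in kind: a candidate who has the skills without the expected vocabulary is rejected.

```python
# Toy CV screener: pass a CV only if it contains every required keyword
# verbatim. Keywords and CV texts are invented for illustration; real
# systems are fancier but can fail the same way when wording differs
# from the expected vocabulary.

REQUIRED_KEYWORDS = {"python", "kubernetes", "leadership"}

def screen_cv(cv_text):
    """Return True if the CV mentions every required keyword verbatim."""
    words = set(cv_text.lower().split())
    return REQUIRED_KEYWORDS <= words  # set containment: all keywords present

buzzword_cv = "Python Kubernetes leadership synergy expert"
strong_cv = ("Led a team of eight engineers, built container "
             "orchestration tooling, ten years writing Python services")

print(screen_cv(buzzword_cv))  # True: says the magic words
print(screen_cv(strong_cv))    # False: same skills, different vocabulary
```

The second CV arguably describes a stronger candidate, yet it never says "Kubernetes" or "leadership" verbatim, so the filter drops it; keeping humans in the loop, as the answer argues, is one guard against exactly this.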

But if you think that's not a nice use of the technology, imagine you've got an AI as a boss, telling you what to do moment by moment through your working life. And there are companies pursuing that: looking at every email that you send, commenting on it. You know: "Wooldridge, you only sent 20 emails today, the company average is 22. You took five bathroom breaks today, the company average is three." Do you want to live in a world where AI is giving you that kind of feedback? I don't, and I dare say you don't either. There are companies pursuing that nonsense now.

Yeah, we won't name them. Okay, we do get one more question, so let's end with the gentleman in the back; his hand was up first.

I have a very simple question. We talk about loss of jobs due to AI. Do we think that in a hundred years' time it will be strange to explain to a child that the jobs being done in admin and companies used to be done by humans? I think they will find that very strange.

Well, the future is going to be not just weirder than we imagine, but weirder than we can imagine. I have teenage kids who've grown up with the internet, and they just assume that it's there and it's always on, and when it doesn't work, for whatever reason, they're just perplexed: something's gone wrong with the world if the internet doesn't work for them. Kids that are seven or eight years old now are the first generation in history that's going to grow up surrounded by very powerful general-purpose AI tools like ChatGPT, and they are going to do the weirdest things with them. The best example I can give you is to go back to the origins of YouTube, which is 2005 or so. Nobody really knew what it was at the time: you could upload family videos and share them with family members, or you could upload clips of your favorite TV shows. Nobody predicted YouTube influencers, or the fact that people would not just be able to make a living but actually make a fortune by making videos of themselves playing computer games and talking over it. Nobody predicted that. In exactly the same way, we can't predict right now how our kids are going to use AI in the future. But the basics of humanity are not going to change. They didn't change with rock and roll, they didn't change with television, they didn't change with cinema, they didn't change with novels. The fundamentals of humanity and human relationships are going to be the same, but our kids are going to be creative in ways that we just find, as I say, weird and hard to imagine. For them, it's going to be a ride.

He's, again, very optimistic. It is a ride, right? But I think this moment in time is critical. We need the science, and we need to curtail the for-profit motive, so that we can be safe with the technology.

Absolutely. Michael Wooldridge, thank you so much for joining us in Studio B: The AI Series. I'm Maria Ressa. Thank you for joining us.

[Applause]

[Music]
