Russia and Iran use AI to target US election | BBC News

BBC News
19 Sept 2024 · 26:33

Summary

TL;DR: The transcript from 'AI Decoded' discusses the threat of generative AI in spreading disinformation, with a focus on deepfakes and their impact on democracy and elections. It covers California's new law against deepfakes in elections, watermarking as a potential solution, and the role of social media companies in regulation. Experts weigh in on the challenges of detecting deepfakes and the importance of critical media literacy. The show also features AI's role in debunking conspiracy theories through chatbots, highlighting the potential of AI in combating the spread of false information.

Takeaways

  • 📜 California has passed a bill making it illegal to create and publish deep fakes related to upcoming elections, with social media giants required to identify and remove deceptive material from next year.
  • 🐱 The script discusses the problem of AI-generated memes, such as those of cats and ducks that powered the pet-eating rumor mill in America, with dangerous consequences.
  • 🏛️ Beijing is pushing for AI to be watermarked to help retain social order, placing responsibility on creators to ensure the authenticity of AI-generated content.
  • 🎤 The script mentions how AI has been used to hijack the image of celebrities like Taylor Swift, with fake images showing her fans endorsing a political candidate.
  • 🌐 The Microsoft Threat Analysis Center in New York City works to detect and disrupt cyber-enabled influence threats to democracies worldwide.
  • 🔍 The center has detected attempts by Russia, Iran, and China to influence the US election, with each nation using different tactics such as fake videos and websites.
  • 🤖 AI is being used to combat the spread of misinformation, with researchers developing tools to detect deep fakes and provide explanations for their authenticity.
  • 💡 The discussion highlights the need for watermarking as a potential solution to identify genuine content, but also acknowledges the challenges in keeping up with advancing technologies.
  • 🌐 There's a call for a global approach to traceability in AI-generated content, similar to supply chain management, to ensure the origin and authenticity of digital creations.
  • 🤖 The script introduces a chatbot designed to deprogram individuals who believe in conspiracy theories by engaging them in fact-based conversations.

Q & A

  • What is the significance of the bill signed by Governor Gavin Newsom in California regarding deep fakes?

    -The bill signed by Governor Gavin Newsom makes it illegal to create and publish deep fakes related to upcoming elections. Starting next year, social media giants will be required to identify and remove any deceptive material, marking California as the first state in the nation to pass such legislation.

  • How does generative AI amplify the threat of disinformation?

    -Generative AI tools, which are largely unregulated and freely available, have the potential to create convincing fake content, including deep fakes and manipulated media, which can be used to spread disinformation and undermine trust in elections and freedoms.

  • What is the role of the Microsoft Threat Analysis Center in New York City?

    -The Microsoft Threat Analysis Center, located in New York City, is a secure facility that monitors attempts by foreign governments to destabilize democracy. It detects, assesses, and disrupts cyber-enabled influence threats to democracies worldwide.

  • How do the analysts at the Microsoft Threat Analysis Center detect foreign influence attempts on US elections?

    -Analysts at the Microsoft Threat Analysis Center detect foreign influence attempts by analyzing data and reports, identifying patterns, and advising governments and private companies on digital threats. They have detected simultaneous attempts by Russia, Iran, and China to influence the US election.

  • What challenges do AI tools face in detecting deep fakes?

    -AI tools face challenges in detecting deep fakes due to the continuous advancement of generative AI technologies, which can create increasingly realistic fake content. Additionally, AI tools may struggle with images or videos that are too far away from what they have seen during training, leading to potential misclassifications.
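The out-of-distribution failure described here can be illustrated with a toy sketch. This is not any real detector: the feature vectors, the threshold, and the data below are all invented for illustration, and real deepfake detectors learn their features with neural networks rather than hand-made lists.

```python
# Toy out-of-distribution check: flag inputs whose feature vector lies
# far from the centroid of the "training" features, meaning the
# classifier's verdict on them should not be trusted. All numbers here
# are hypothetical stand-ins, purely for illustration.
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_out_of_distribution(features, train_vectors, threshold):
    """True when the input is too far from what the model saw in training."""
    return distance(features, centroid(train_vectors)) > threshold

# Pretend these were extracted from training images.
train = [[0.9, 0.1, 0.2], [1.0, 0.2, 0.1], [0.8, 0.0, 0.3]]
print(is_out_of_distribution([0.9, 0.1, 0.2], train, threshold=1.0))  # False: near training data
print(is_out_of_distribution([5.0, 4.0, 4.0], train, threshold=1.0))  # True: far from training data
```

A detector that also reported this distance alongside its verdict would give users a hint of when its "deep fake / not deep fake" answer is likely a misclassification.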

  • What is the potential solution to the deep fake problem discussed in the script?

    -One potential solution discussed is the use of AI to detect misinformation and deep fakes. This involves training AI tools to identify inconsistencies and anomalies in content, and providing explanations for why certain content is flagged as a deep fake.

  • Why is watermarking proposed as a solution to the deep fake problem?

    -Watermarking is proposed as a solution because it can provide a form of traceability and authenticity to digital content. It would allow for the identification of original and verified content, helping to distinguish it from deep fakes.

  • How does the concept of 'situational awareness' relate to the detection of deep fakes?

    -Situational awareness in the context of deep fake detection refers to the ability to proactively monitor and analyze content on social media platforms using AI tools. This allows for a global-scale understanding of where and when disinformation is being spread.

  • What is the 'debunk bot' mentioned in the script and how does it work?

    -The 'debunk bot' is an AI chatbot designed to converse with conspiracy theorists using fact-based arguments to debunk their beliefs. It draws on a vast array of information to engage in conversations and has shown success in reducing conspiracy beliefs by an average of 20% in experimental settings.

  • How does the debunk bot approach the challenge of changing deeply held beliefs?

    -The debunk bot approaches the challenge by providing tailored information and facts directly related to the specific conspiracy theories that individuals hold. It engages in a conversation that summarizes and challenges the beliefs, using evidence to persuade users away from their conspiracy theories.
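The "tailoring" idea, matching a rebuttal to the specific claim a person states, can be sketched in a few lines. The actual debunk bot described in the programme is built on a large language model; this keyword matcher only illustrates the selection idea, and the fact snippets below are placeholders invented for the example, not sourced claims.

```python
# Toy illustration of "tailored" rebuttal selection: score each fact
# snippet by word overlap with the user's stated claim and return the
# best match. A real system would generate a bespoke, evidence-based
# reply; this sketch only shows why matching matters.

FACT_SNIPPETS = [
    "Moon rock samples were independently verified by labs worldwide.",
    "Contrails are condensed water vapour from jet engine exhaust.",
    "Vote totals are audited against paper records in most states.",
]

def _words(text):
    """Lower-case the text and split it into a set of bare words."""
    return set(text.lower().replace(".", "").split())

def pick_rebuttal(claim, snippets=FACT_SNIPPETS):
    """Return the snippet sharing the most words with the claim."""
    return max(snippets, key=lambda s: len(_words(s) & _words(claim)))

print(pick_rebuttal("the moon landing was faked and the rock samples are fake"))
# Selects the moon-rock snippet: it overlaps the claim on the most words.
```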

Outlines

00:00

📜 AI and the Threat of Disinformation

The segment begins with a discussion on the challenges posed by generative AI in creating and disseminating disinformation. It highlights the recent legislation in California that criminalizes the creation of deep fakes related to elections and the upcoming requirement for social media platforms to identify and remove deceptive content. The conversation also explores the global impact, including China's efforts to watermark AI content and the manipulation of public figures like Taylor Swift. The segment emphasizes the role of AI in both creating and combating disinformation, with a focus on the importance of critical media literacy and the challenges of regulating technology across borders.

05:00

🔍 Deep Fakes: Detection and Regulation Challenges

This paragraph delves into the difficulty of detecting deep fake imagery and audio, and the potential solutions such as watermarking to authenticate content. It discusses the ongoing 'whack-a-mole' game with technology, where advancements in detection are met with new methods of manipulation. The conversation includes the role of social media companies in regulating content, the challenges of enforcing regulations globally, and the importance of preparing citizens with critical media skills. It also touches on the debate over First Amendment rights and free speech in the context of regulating AI-generated content.

10:02

🕵️‍♂️ AI Tools for Detecting Deep Fakes

The focus of this section is on the development of AI tools to detect deep fakes. It features a discussion with Dr. Christian Schroeder de Witt from the University of Oxford, who is researching methods to identify deep fakes using AI. The conversation includes the use of AI to track down AI-generated misinformation, the limitations of current detection tools, and the need for further research. Examples of deep fake images, such as the Pope in a puffer jacket, are used to illustrate the challenges in detection and the potential of AI in providing explanations for why certain images are deemed deep fakes.

15:04

🌐 Social Media and the Spread of Disinformation

This segment discusses the role of social media in the spread of disinformation and the vested interest companies have in maintaining a reliable information ecosystem. It highlights the potential of AI to not only create but also combat deep fakes and misinformation. The conversation includes the potential of context-aware AI to determine the authenticity of content and the importance of situational awareness on a global scale. The segment also touches on the challenges of disinformation spread by conspiracy theories and the difficulty of changing deeply held beliefs with facts alone.

20:07

🤖 Debunking Conspiracy Theories with AI

The final paragraph introduces the concept of using AI chatbots to debunk conspiracy theories. It features an interview with Dr. Thomas Costello, who discusses the development of a 'debunk bot' that engages with conspiracy theorists using fact-based arguments. The segment covers the effectiveness of the chatbot in reducing belief in conspiracy theories, the potential for incorporating such technology into existing platforms, and the challenges of changing beliefs that are deeply ingrained. It also raises the risk of using AI for disinformation if not properly regulated.

25:08

🏁 The Role of Industry in AI Regulation

In the concluding remarks, the discussion turns to the role of the industry in self-regulating AI technologies. It highlights the importance of continuous improvement in AI models to prevent the spread of false content and the business imperative for companies to provide accurate and reliable information. The segment emphasizes the speed at which industry can adapt compared to legislation and the potential for industry-driven solutions to lead the way in addressing AI-related challenges.

Keywords

💡Deep fakes

Deep fakes refer to synthetic media in which a person's likeness has been manipulated or replaced using AI. In the video, deep fakes are discussed as a significant threat to disinformation, particularly in the context of elections. The script mentions California's legislation against deep fakes related to elections, highlighting the urgency and real-world implications of this technology.

💡Disinformation

Disinformation is the deliberate spread of false information to deceive and mislead. The video script explores how generative AI tools can amplify disinformation, undermining elections and freedoms. It is a central theme, with examples including AI-generated memes that fueled rumors with dangerous consequences.

💡Generative AI

Generative AI refers to AI systems that can create new content, such as text, images, or videos. The script discusses generative AI as tools that, while creative, can also be misused to generate disinformation, emphasizing the dual-use nature of this technology.

💡Watermarking

Watermarking in the context of AI refers to the practice of embedding a digital signature or mark into content to verify its authenticity. The script suggests watermarking as a potential solution to identify and combat deep fakes, though it also acknowledges the challenges in implementing such measures.
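The basic embed-and-extract mechanics of an invisible watermark can be shown with a minimal sketch. This is a deliberately fragile toy, hiding a marker string in the least significant bit of each byte of some media payload; real watermarking schemes of the kind debated in the programme must survive cropping, compression, and re-encoding, which this does not.

```python
# Minimal least-significant-bit watermark sketch: hide a short marker
# in the lowest bit of each byte of a media payload, then read it back.
# The marker, payload, and scheme are illustrative only.

MARK = b"AI"

def embed(payload: bytearray, mark: bytes = MARK) -> bytearray:
    """Write the mark's bits (MSB first) into the payload's low bits."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(payload):
        raise ValueError("payload too small to carry the mark")
    out = bytearray(payload)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit only
    return out

def extract(payload: bytearray, length: int = len(MARK)) -> bytes:
    """Reassemble the mark from the payload's low bits."""
    bits = [payload[i] & 1 for i in range(length * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

media = bytearray(range(32))   # stand-in for pixel bytes
marked = embed(media)
print(extract(marked))         # b'AI'
print(extract(media) == MARK)  # False: unmarked media lacks the mark
```

The sketch also hints at the "whack-a-mole" problem raised in the discussion: anyone who knows the scheme can rewrite those low bits and strip the mark, which is why robustness against removal is the hard part.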

💡Election interference

Election interference is the manipulation or disruption of an electoral process. The video discusses how foreign governments use AI to create deep fakes and other forms of disinformation to interfere with elections, as detected by the Microsoft Threat Analysis Center.

💡Cyber-enabled influence threats

Cyber-enabled influence threats are attempts to sway public opinion or disrupt social order through digital means. The script mentions the Microsoft Threat Analysis Center's role in detecting and disrupting such threats, which include foreign attempts to influence US elections.

💡AI detection

AI detection refers to the use of AI algorithms to identify deep fakes and other forms of manipulated media. The script features a discussion on how AI can be used to track down deep fakes, suggesting that AI can be a tool for combating the very problems it helps create.

💡Digital threats

Digital threats encompass a range of online risks, including cyberattacks, data breaches, and disinformation campaigns. The video script discusses how organizations like the Microsoft Threat Analysis Center advise governments and companies on these threats, highlighting the importance of digital security in the modern era.

💡Provenance

Provenance, in the context of digital content, refers to the ability to verify the origin and history of a piece of content. The script discusses the importance of establishing provenance for AI-generated content, suggesting that cryptographic signatures or other forms of traceability could help authenticate content.
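The cryptographic-signature idea mentioned here can be sketched as: hash the content at the point of creation, sign the hash, and let anyone later verify both origin and integrity. For simplicity this sketch uses an HMAC with a hypothetical shared secret; real provenance systems use public-key signatures so verification does not require the signing secret.

```python
# Sketch of content provenance via a signed content hash. The key and
# the content string below are invented for illustration.
import hashlib
import hmac

SECRET = b"creator-signing-key"  # hypothetical key, illustration only

def sign(content: bytes, key: bytes = SECRET) -> str:
    """Sign the SHA-256 digest of the content."""
    return hmac.new(key, hashlib.sha256(content).digest(), hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str, key: bytes = SECRET) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign(content, key), signature)

original = b"photo taken at the rally, 2024-09-19"
tag = sign(original)
print(verify(original, tag))                 # True: untouched content
print(verify(original + b" (edited)", tag))  # False: any tampering breaks it
```

This is the property the panel describes: a point-of-creation certification that travels with the content, where anything lacking a valid signature is presumed unverified.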

💡Conspiracy theories

Conspiracy theories are explanations of events or situations that invoke secret plots by powerful individuals or groups. The video script mentions the use of AI chatbots to debunk conspiracy theories by providing tailored facts and evidence, aiming to reduce belief in such theories.

💡Fact-based arguments

Fact-based arguments are persuasive statements grounded in verifiable facts. The script describes an AI chatbot designed to engage with conspiracy theorists using fact-based arguments to challenge and potentially change their beliefs, showcasing the potential of AI in promoting accurate information.

Highlights

California Governor Gavin Newsom signed a bill making it illegal to create and publish deep fakes related to upcoming elections.

Social media giants will be required to identify and remove deceptive material from next year in California.

The challenge of distinguishing between fake and real content is becoming increasingly difficult with generative AI tools.

AI memes of cats and ducks have fueled rumors with dangerous consequences, illustrating the impact of disinformation.

Beijing is pushing for AI to be watermarked to retain social order in a world of manipulated messages.

Taylor Swift's image was hijacked by the former president, who shared fake images of her fans endorsing him, showing the personal impact of AI disinformation.

Microsoft's Threat Analysis Center in New York is at the forefront of defending against cyber-enabled influence threats to democracies.

Russian, Iranian, and Chinese attempts to influence the US election have been detected simultaneously for the first time.

The US election's dramatic nature is complicating outside interference, particularly affecting Russian strategies.

Iranian election influence activity has been detected via bogus websites, currently under FBI investigation.

China is using fake social media accounts to provoke reactions in the US public, increasing hostility on social media.

The debate over watermarking as a solution to generative AI disinformation is discussed, with concerns about potential manipulation.

The importance of preparing citizens with critical media skills to construct narratives and verify information sources is emphasized.

The challenge of regulating AI and deep fakes globally, especially when companies may relocate to less regulated countries, is highlighted.

The potential for AI to track down deep fakes and misinformation is explored, with AI being used to solve the problems it creates.

Researchers are developing AI tools to identify deep fakes by creating explanations for why content is identified as fake.

The limitations of current AI tools in detecting deep fakes, such as issues with finger detection and temporal inconsistencies, are discussed.

The potential for AI to rearchitect systems to provide a cryptographic signature of authenticity for content is considered.

The idea of using AI chatbots to deprogram conspiracy theorists by providing fact-based arguments is introduced.

The debunk bot, an AI chatbot, has shown success in reducing belief in conspiracy theories by 20% on average in experimental conversations.

Transcripts

play00:00

you are watching the context with me

play00:02

Christian Fraser it is time for our

play00:04

regular Thursday feature AI

play00:09

[Music]

play00:12

decoded welcome to the program freely

play00:15

available largely unregulated the

play00:18

creative tools of generative AI now

play00:21

amplifying the threat of disinformation

play00:23

how do we tackle it what can we trust

play00:26

and how are our enemies using it to

play00:29

undermine our elections and our freedoms

play00:32

this week Governor Gavin Nome signed a

play00:34

bill in California that makes it illegal

play00:36

to create and published deep fakes

play00:39

related to the upcoming election and

play00:41

from next year the social media Giants

play00:43

will be required to identify and remove

play00:46

any deceptive material it is the first

play00:49

state in the nation to pass such

play00:51

legislation is it the new Benchmark some

play00:54

of this stuff obviously fake some of it

play00:56

deigned to poke fun but look how these

play00:59

AI memes of cats and Ducks powered the

play01:02

pet eating Rumor Mill in America with

play01:05

dangerous

play01:06

consequences it is a problem too in

play01:09

China how does the Communist Party

play01:10

retain social order in a world where the

play01:12

message can be manipulated Beijing is

play01:15

pushing for all the AI to be watermarked

play01:18

and he's putting the onus on the

play01:19

creators and from politics to branding

play01:23

there is no briger brand than Taylor

play01:25

Swift hijacked by the former president

play01:28

who shared fake images of her fans

play01:31

endorsing him it affects us

play01:34

all with me as ever in the studio our

play01:37

regular commentators and AI presenters

play01:40

Stephanie ha is here and from Washington

play01:42

our good friend Miles Taylor who worked

play01:44

in National Security advising the former

play01:47

Trump Administration we'll talk to them

play01:49

both in a second but before we do that

play01:51

we're going to show you a short film one

play01:52

of the many false claims that has

play01:54

appeared online in recent months was a

play01:56

story that Cara Harris had been involved

play01:58

in a hit and run accident in 2011 that

play02:01

story was created by a Russian troll

play02:03

farm and was one of the many

play02:05

inflammatory stories Microsoft

play02:07

intercepted the threat analysis unit

play02:10

that does their work in New York is at

play02:12

the very Forefront in defending all our

play02:14

elections our AI correspondent March

play02:17

schlack has been to see it Time Square

play02:20

New York City an unlikely location for a

play02:24

secure facility which monitors attempts

play02:27

by Foreign governments to destabilize

play02:29

democracy it is however home to mtag the

play02:33

Microsoft threat analysis Center its job

play02:36

is to detect assess and disrupt cyber

play02:39

enabled influence threats to democracies

play02:43

worldwide the work that's carried out

play02:45

here is extremely sensitive we the very

play02:48

first people that have been permitted to

play02:50

film

play02:51

inside it's also the first time Russian

play02:54

Iranian and Chinese attempts to

play02:56

influence the US election have all been

play02:58

detected at once all three are in play

play03:02

and this is the first cycle where we've

play03:03

had all three that we can definitely

play03:05

point to individuals from this

play03:07

organization serve on a special

play03:09

presidential Committee in the Kremlin

play03:11

advis reports compiled by these analysts

play03:13

advise governments like the UK and us as

play03:17

well as private companies on digital

play03:19

threats this team has noticed that the

play03:21

dramatic nature of the US election is

play03:24

complicating attempts at outside

play03:26

interference the biggest impact of the

play03:29

switch of president uh Biden for vice

play03:31

president Harris has been it's really

play03:34

thrown the Russians so far off their

play03:36

game they really focused on Biden as

play03:38

somebody they needed to remove from

play03:40

office to get what they wanted in

play03:41

Ukraine Russian efforts have now pivoted

play03:43

to undermining the Harris Waltz campaign

play03:46

via a series of fake videos designed to

play03:49

provoke

play03:50

controversy these analysts were

play03:52

instrumental in detecting Iranian

play03:54

election influence activity via a series

play03:57

of bogus websites the FBI is now

play04:00

investigating this as well as Iranian

play04:02

hacking of the Trump campaign we found

play04:05

that in the source code for these websit

play04:07

they were doing was using AI to rewrite

play04:10

content from a real place and using that

play04:12

for the bulk of their website and then

play04:14

occasionally they would write real

play04:16

articles um when it was a very specific

play04:19

political point they were trying to make

play04:21

the third major player in this election

play04:23

interference is China using fake social

play04:25

media accounts to provoke a reaction in

play04:28

the US public experts are unconvinced

play04:31

these campaigns affect which way people

play04:34

actually vote but they worry they are

play04:36

successful in increasing hostility on

play04:39

social media Mark chisl BBC News yeah

play04:43

that gives you an idea of just how quick

play04:44

this is advancing Stephanie do you do

play04:47

you think we're almost at a point as the

play04:49

technology improves the creative

play04:51

technology that we're going to be very

play04:55

close very soon to not knowing the

play04:57

difference between fact and fiction it's

play05:00

getting harder and harder to detect a

play05:02

lot of the deep fake imagery audio is

play05:04

particularly very difficult to detect

play05:06

it's a lot easier to fake so yes I think

play05:08

we're right now possibly in the last us

play05:10

election where it's kind of easy to see

play05:12

when you're being manipulated and the

play05:14

the trick really is do you want to

play05:16

believe it because what this is all

play05:18

about is really hijacking your emotions

play05:21

and watermarks because that is often the

play05:23

the goto solution to this why would that

play05:26

not be the the answer to all the ills of

play05:29

generated generative AI I still wonder

play05:32

if there would be ways of manipulating

play05:34

even that but it's probably a pretty

play05:35

good start it's just that thing you

play05:37

always feel like you're playing

play05:38

whack-a-mole with these Technologies you

play05:40

know you do one thing and then it

play05:41

advances and you have to catch up again

play05:43

so we would probably start with

play05:45

watermarks and then there would be an

play05:47

advance and a kickback and we'd have to

play05:49

react to that and so on and so forth I

play05:51

think it's also about preparing citizens

play05:53

though to have the critical media skills

play05:56

that we all need to be able to construct

play05:58

narratives look at who is giving us

play06:00

information and just does it check with

play06:04

reality miles um I was saying to

play06:06

Stephanie this is a good step forward

play06:08

what's happened in California this week

play06:10

you've got the governor there putting

play06:12

the onus on the social media companies

play06:14

and on the creative companies to do

play06:17

something about this and particularly

play06:18

around the election and then Stephanie

play06:20

said to me well okay American companies

play06:23

regulated by American legislators why

play06:26

wouldn't they just go to

play06:28

China look I mean I think that's one of

play06:30

the concerns always when it comes to

play06:32

Tech regulation and and Christian you

play06:34

remember the debate well over encryption

play06:37

in the United States there was the San

play06:39

Bernardino terrorist attack uh you know

play06:42

almost 10 years ago now where the FBI

play06:44

could not get into the shooter's phone

play06:47

and it led to a big debate in the United

play06:49

States about these encrypted messaging

play06:51

apps like Telegram and signal and

play06:54

whether it should be legislated that

play06:56

those were forbidden in the United

play06:58

States opponents of those laws though

play07:00

said well sure you can outlaw them here

play07:04

but someone overseas is going to create

play07:05

the same apps and it's going to be

play07:07

really difficult to prevent people from

play07:09

using a version of it overseas we Face

play07:12

the same problem here with regulations

play07:14

around deep fake deep fakes and AI it's

play07:17

only as far as us legislation and law

play07:20

enforcement can reach that those types

play07:22

of things can be enforced so there is a

play07:25

big challenge here but also there's a

play07:27

domestic challenge about the first First

play07:29

Amendment implications and Free Speech

play07:31

implications and of course Governor

play07:33

nome's signing of that law has opened up

play07:35

that debate as well so there will be a

play07:37

lot of contention the next few years

play07:39

about how to get this right from a

play07:41

legislative and Regulatory standpoint

play07:43

the other thing that occurs to me and we

play07:44

talk about protecting Children online

play07:46

all the time on this program one of the

play07:49

issues the companies always come up

play07:50

against is finding the material and

play07:53

getting rid of it if you are having to

play07:56

find very good deep fake material

play08:00

that process becomes much more difficult

play08:02

doesn't it and how do we find a metric

play08:05

to to hold the social media companies

play08:07

and the online companies to

play08:09

task well I think Stephanie said

play08:11

something really important here which

play08:13

was the game of whack-a-mole you're

play08:15

playing if you think that watermarking

play08:18

you know basically sticking a putting a

play08:19

sticker on this content and saying this

play08:21

is fake if you think that's a solution

play08:23

it's going to be really hard to keep up

play08:25

a lot of the experts I talk to in AI say

play08:28

that maybe that a short-term solution

play08:30

but in the longer term you have to

play08:33

rearchitecturing

play08:56

at this place at this time and that

play08:59

can't be changed right it's tied to a

play09:01

public Ledger uh not that people can see

play09:03

your photos publicly but that's a

play09:05

cryptographic signature that can't be

play09:06

broken eventually all of our Tech will

play09:09

be signed with that Providence that says

play09:12

I am real and you'll know if it's not

play09:14

real because it won't have that point of

play09:16

creation certification but it's years

play09:19

before we're there and in the meantime a

play09:21

lot of difficult conversations are going

play09:23

to be had it's almost a supply chain

play09:26

approach or even a criminal approach

play09:27

when you have a chain of evidence and

play09:29

you have to be able to follow it all the

play09:31

way through and you can't tamper with it

play09:33

or when we had mad C disease here in the

play09:35

United Kingdom many years ago people

play09:37

suddenly wanted to know when they were

play09:39

going grocery shopping they wanted to

play09:40

buy some beef what farm did it come from

play09:43

and suddenly people realized they needed

play09:45

traceability all the way through the

play09:46

food chain so I'm wondering if there's a

play09:49

parallel there to help people understand

play09:51

all of the things that you're creating

play09:53

can have that encoded so you would

play09:55

always be able to know it's like

play09:56

following through like a painting when

play09:59

is a painting sold it might go through

play10:01

50 different hands if it's 400 years old

play10:04

you know before it finally ends up in

play10:05

the Met um where did it come from was it

play10:08

illegally bought you know Etc you you

play10:11

should be able to follow data through in

play10:13

the same way let's bring in someone uh

play10:15

who is working in in this field here in

play10:17

at the studio with us is Dr Christian

play10:19

SCH deit uh he is a senior research

play10:22

associate in machine learning at the

play10:23

University of Oxford he and his team are

play10:25

researching how to identify some of

play10:27

these deep fakes using AI welcome to the

play10:31

program um we were just talking about

play10:34

how quickly things are advancing to the

play10:36

point where to the naked eye it's

play10:38

becoming more difficult certainly with

play10:40

imagery what sort of Technology are you

play10:43

developing that makes that easier yes so

play10:45

Christian um I really like this

play10:47

discussion um I think um the solution to

play10:50

our problems of establishing Provence um

play10:52

of content um will involve both a lot of

play10:55

research but also wider adoption of

play10:57

existing Technologies so in terms of

play10:59

research I think the clip really brought

play11:01

home you know that um AI is being used

play11:04

to amplify the misinformation problem so

play11:06

let's use AI to solve it so some of the

play11:08

research that I do is about using AI to

play11:11

detect misinformation so you're using

play11:14

the AI to track down the Deep fake AI so

play11:18

basically yes so so what I did the

play11:19

summer spending um you know doing some

play11:21

research doing some research with BBC

play11:23

verify um and University of Oxford was

play11:25

um just you know when you have a picture

play11:27

for example um explain

play11:29

whether it is a deep fake or not let's

play11:31

bring one up I've got one um that I

play11:33

think you've looked at and people will

play11:35

be familiar with this it's it's the Pope

play11:37

in a puffer jacket which actually did

play11:39

get into some uh news streams around the

play11:42

time that this photo came out so

play11:43

although we're joking it did actually

play11:45

deceive quite a lot of people show me

play11:48

what you did with this yeah so exactly

play11:51

so you can see um the pope and the

play11:52

puffer jacket obviously from the context

play11:54

it's quite clear it's a deep fake right

play11:55

um and it's probably for entertainment

play11:57

purposes but a human expert for example

play11:59

BBC verify could look at this picture

play12:02

and could um find the details that are a

play12:04

bit off for example the spectacle seem

play12:06

to be fused into the cheeks or the

play12:08

crucifix doesn't quite attach to the

play12:10

chain right and so the question is um

play12:12

you see it's very important to have

play12:14

these explanations as well not just like

play12:16

a number of like this is 0.7% or 0.7

play12:20

deep fake or not but you need to have an

play12:21

explanation for why it is a deep fake so

play12:23

we now have ai tools that can create

play12:25

these explanations as well right um what

play12:28

something that you put on the desktop

play12:29

something that you could run a

play12:30

photograph through yeah potentially yes

But these tools still have a lot of failure cases, and this is where we need more research.

Where do they fail, and why? Famously it's things like not getting fingers right, so you might get six fingers on a hand.

Yes, that's a classic. On videos, for example, you get some sort of temporal inconsistency: an object suddenly disappears. But the problem is that these tools are trained on a lot of data, and they are learning so-called features, patterns, that help them make these decisions. It can happen that these patterns are present in some images that are too far away from what the model has seen during training.

Without getting too technical, can you explain that to people? Is it a pixel difference? It's not in the way the image looks; presumably the AI is looking deeper into the image than that?

Yes. The AI takes in an image and projects it into a very high-dimensional space. Within that high-dimensional space you then do a dimensionality reduction into a lower-dimensional space, and in this lower space you can form these features. Then, if you have an image the model hasn't seen during training, these features might not generalize to it, and you can get cases where an image evokes impressions that are wrong: you see some reflections or something, and actually it is not a deepfake.
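The pipeline just described (embed an image into a high-dimensional space, reduce it, form features, classify, and fail on out-of-distribution images) can be sketched in toy form. This is an illustration only, with synthetic stand-in embeddings, PCA via SVD, and a nearest-centroid classifier; it is not the guest's actual system, and real detectors use learned neural features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for high-dimensional image embeddings: "real" and
# "fake" images occupy slightly different regions of a 512-dim space.
real = rng.normal(loc=0.0, scale=1.0, size=(100, 512))
fake = rng.normal(loc=0.8, scale=1.0, size=(100, 512))
X = np.vstack([real, fake])
y = np.array([0] * 100 + [1] * 100)  # 0 = real, 1 = fake

# Dimensionality reduction (PCA via SVD on centered data): project the
# high-dimensional embedding into a lower-dimensional feature space.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:8]               # keep the top 8 directions
Z = (X - mean) @ components.T     # the low-dimensional "features"

# Minimal classifier in the reduced space: nearest class centroid.
centroids = np.stack([Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)])

def classify(embedding):
    """Return 0 (real) or 1 (fake) for one 512-dim embedding."""
    z = (embedding - mean) @ components.T
    return int(np.argmin(np.linalg.norm(centroids - z, axis=1)))

accuracy = np.mean([classify(e) == label for e, label in zip(X, y)])
print(f"training accuracy: {accuracy:.2f}")
```

An embedding unlike anything in the training set can land near the wrong centroid, which is the out-of-distribution failure mode the conversation turns to next: a genuinely novel real photo being flagged as fake.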

Stephanie mentioned the photographs they struggle with. I've got one here: this is Lionel Messi kissing the World Cup, much to my chagrin. But this one is real, and the machine thought it was fake. Why?

The machine might think so simply because it hasn't seen an image close enough to this picture in its training set. And as we always get new images in, and Messi winning the World Cup, for example, was a new occasion, it might think that some reflections on the trophy, or the way Messi holds his hands, or maybe the skin tone, aren't natural. And the problem is that we then get these explanations, and these explanations can be very convincing, but they're nevertheless wrong.

Miles, do you like this idea of AI tracking down AI deepfakes?

I don't just like it, Christian, I love it. We've got to use AI against AI to protect ourselves; it's actually going to be our best asset. One of the interesting things happening right now is that we always focus on who's developing the technology that could be used for bad, but my fellow Oxonian there on set, and a lot of folks around the world, are now investing time and resources into building companies around deepfake detection. There are companies in the United States, like Truepic and Reality Defender, that are exciting: they're venture-backed, and a lot of people want to go and work for them. And what do those companies do? They focus solely on trying to prove what is and isn't real. One of the things that's become possible really only in the past few months is that some of these technologies are leveraging context awareness of the world to determine whether something is fake or real. So these models aren't just looking at the image and saying it looks manipulated; they can also say, well, the Pope has been on vacation in Italy for the past couple of weeks, there's no way this photo was just taken with him wearing a puffer jacket. And they can give you a confidence score. That's exciting.
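One simple, generic way to fuse a pixel-level detector's score with a contextual plausibility score into a single confidence is a weighted average in log-odds space. Everything here, the fusion rule, the weight, and the numbers, is an illustrative assumption, not how Truepic or Reality Defender actually work:

```python
import math

def combine_scores(visual_fake_prob, context_fake_prob, visual_weight=0.6):
    """Fuse two 'probability this is fake' estimates by a weighted
    average in log-odds space, then map back to a probability."""
    def logit(p):
        return math.log(p / (1.0 - p))
    fused = (visual_weight * logit(visual_fake_prob)
             + (1.0 - visual_weight) * logit(context_fake_prob))
    return 1.0 / (1.0 + math.exp(-fused))  # sigmoid: back to [0, 1]

# Pixel analysis alone is borderline (0.55), but context -- e.g. "the Pope
# was on vacation elsewhere that week" -- is strong evidence (0.95), so the
# fused confidence comes out clearly on the "fake" side.
print(combine_scores(visual_fake_prob=0.55, context_fake_prob=0.95))
```

Log-odds averaging has the convenient property that two neutral 0.5 inputs fuse to exactly 0.5, while a confident signal on either side pulls the result toward it.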

Are you incorporating that into your work?

Absolutely. This is incorporating wider context: where the content is found, when it is found, and who is depicted.

So, sort of semantic information?

Absolutely, yes.

It strikes me that the social media companies and the online companies have a vested interest in this, because if you can't tell fact from fiction you get what's called a liar's dividend: you become a disruptor, you poison the well so much that no one believes anything. And that's not good for a social media model that makes its money from spreading news and informing people.

It just raises the question of what social media is for, right? It was quite exciting at first, when it was this new thing and you could stay in touch with your friends, and then a lot of people, journalists, would use certain tools to keep up with the news and get breaking news fast. But once it starts feeling like actually they're just reading your data, or you're looking for news to get it fast but it's not actually reliable, and information ecosystems are being flooded all the time, eventually you might just turn off. And that's without even going into the mental health implications of being on these sites, which we know are really harmful for people. So I sometimes wonder if we might have lived through the golden age of social media and are now entering this new phase; and if it isn't cleaned up, people could just end up leaving it, or only going to it the way you would read the National Enquirer in the United States, to read about aliens or something.

Are the big developers interested in what you're doing?

Yes, absolutely. This summer my collaboration was with a big tech company, in fact, so there is a lot of interest in these solutions. Actually, the interest goes even further: what we can do now is proactively look for deepfakes and disinformation on social media platforms using autonomous agents. I think this is where things are going, and then we can establish this situational awareness on a sort of global scale.

I've also got to ask you: is this the right environment, the right country, to be developing this in? Do you get the support for stuff like this?

I think so, yes. Generally, yes: I think the UK is a great place.

Well, that's encouraging, isn't it?

On that note: one of the problems here is not so much the deepfake news as the disinformation spread by conspiracy theorists, who are creating material they believe to be true. What if we could bring the conspiracy theorists out of the shadows and back to the light? Coming up after the break, we'll hear about the AI chatbot that is deprogramming the people who have disappeared down the rabbit holes. We'll be right back; stay with us.

Welcome back. The moon landings that never happened; the Covid microchip that was injected into your arm; the pizza pedophile ring in Washington. Conspiracy theories abound, often with dangerous consequences. Many have tried reasoning with conspiracy theorists, but to no avail. How do you talk to someone so convinced of what they believe, who is equally suspicious of why you would even be challenging those beliefs? Well, researchers have set about creating a chatbot to do just that. It draws on a vast array of information to converse with these people using bespoke, fact-based arguments, and the "debunk bot", as it's known, is proving remarkably successful. Joining us on Zoom is the lead researcher, Dr Thomas Costello, an associate professor in psychology at the University of Washington. You're very welcome to the program. Tell us what the debunk bot does.

Yeah, sure, thanks; I'm happy to be here. So the idea is that studying conspiracy theorists and trying to debunk them has been pretty hard until now, because there are so many different conspiracy theories out there in the world, and you need to look across this whole corpus of information comprehensively to debunk all of them and study them in a systematic way. Large language models, these AI tools, are perfect for doing just that. So we ran an experiment where we had people come in and describe a conspiracy theory that they believed in and felt strongly about; the AI summarized it for them and they rated it; and then they entered into a conversation with this debunk bot, which was given exactly what they believed and set up to persuade them away from the conspiracy theory using facts and evidence. What we found at the end of this roughly eight-minute back-and-forth conversation was that the conspiracy theorists reduced their belief in their chosen conspiracy by about 20% on average, and actually one in four people came out the other end of that conversation actively uncertain about their conspiracy: they were newly skeptical.
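The two headline figures (roughly a 20% average reduction in belief, and one in four participants crossing into uncertainty) are summary statistics over pre- and post-conversation belief ratings. A sketch with invented data, purely to show how such figures are computed; the ratings below are synthetic, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Invented pre/post belief ratings on a 0-100 scale: participants start
# confident, and most move down somewhat after the conversation.
pre = rng.uniform(60, 100, size=n)
post = pre * rng.uniform(0.6, 1.0, size=n)

avg_reduction = np.mean((pre - post) / pre)  # mean fractional drop in belief
newly_uncertain = np.mean(post < 50)         # share now below the midpoint

print(f"average belief reduction: {avg_reduction:.0%}")
print(f"share newly uncertain:    {newly_uncertain:.0%}")
```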

And so is the basis that they don't know where to go to get this information, and they are suspicious of anybody who might have the answers to the things that concern them?

Yeah, that could be part of it. I think really it's just being provided with facts and information tailored to exactly what they believe.

And how do you deploy it? Because I can't imagine that conspiracy theorists are wandering around saying, "disprove the conspiracy theory that I believe to be true".

play21:20

believe to be true yeah no I mean that's

play21:22

a great question I think it's one that

play21:24

uh like I'd be curious to hear others

play21:26

answers about too in the in the studies

play21:28

we PID people to come and do it um that

play21:31

said I'm optimistic about uh you know

play21:33

the truth motivations of human beings in

play21:35

general I think people want to know

play21:37

what's true and so if there's a tool

play21:39

that that they trust to do that then

play21:41

then all the better yeah miles can you

play21:42

see a purpose for this in

Yeah, I can certainly see this principle being incorporated into a lot of technology. A lot of us already use things like ChatGPT every day, and I'll actually give you an example, Christian, of ChatGPT disproving something for me. There's a famous Winston Churchill quote: "a lie gets halfway around the world before the truth can get its pants on". No quote better describes the conversation we're having, and how fast this disinformation spreads. Well, guess what: I put that into ChatGPT before I did a presentation on this subject, and it said, hold on a second, that's actually not a quote from Winston Churchill; it's a quote from Jonathan Swift in the 1700s. So AI helped me disprove misinformation that's been around for years. So yes, I think this is important, and it should be integrated into these technologies.

And Christian, is this where the two worlds collide? Because presumably there are conspiracy theorists who believe something so fervently that they put out AI-generated material as well. So if you can deal with the conspiracy theory, maybe you can stop the prevalence of fake material.

Yeah, potentially. I must say, though, that this study was done under laboratory conditions, so it will be very interesting to see whether these results also translate into the real world. And also, the large language models that were used were safety fine-tuned, which means they were set up to tell the truth, and so on. If that safety fine-tuning is not there, they could be used for something we call interactive disinformation: they could be used to convince people of things that are not true. That's the big risk that I see here.

And Thomas, I've got a question for you. I'm curious just how much having good information actually changes people's minds, and the example I would give is smoking. We've known for decades that smoking is bad for you; everybody agrees; we've got all the data to back it up; we put labels on it really clearly. And yet people still smoke, and when you talk to a smoker and try to persuade them to give it up because you care about them, they will sometimes really entrench. It's really hard to break, not just because it's addictive, but because maybe they want to smoke. So I see this parallel, perhaps, with conspiracy theories: we have beliefs, and information is not always enough to change them. It's not just about facts; it's about something else.

Yeah, that's a great point. I think in the case of smoking, or other kinds of drug use, we know it's bad for us when we start doing it; those habits are fundamentally not about information. Whereas beliefs, and particularly conspiracy beliefs, are often descriptive: they're accounts of what went on in the world, that, you know, al-Qaeda didn't put together the 9/11 terrorist attacks, it was the government. And dealing with claims about the world is something that I think is conducive to informational persuasion in a way that nicotine use maybe is not.

Yeah. Miles, we focus so much on legislating; it's the question I always ask you: how far behind is Congress on that, and what the state houses are doing about AI legislation. But what we've shown tonight is actually that it's the industry itself that is forcing the change. Maybe it's not legislation, because legislation is always one step behind.

Well, Christian, I'm going to give you an embarrassing admission that proves that point. I was at dinner last night with one of the creators of ChatGPT, GPT-3, one of the earlier versions; she worked for Sam Altman. We were talking about the technology, and I complained to her. I said, you know, I was teaching a course at the University of Pennsylvania and I got lazy. I was supposed to come up with a list of 25 books on a subject for my students, and I said, I'm going to look it up on GPT: what are the best 25 books? It produced the list, and I emailed it out. Well, guess what: my students emailed me and said all of those books are fake. GPT-3 had come up with a bunch of fake books. And I said this to her, and she said, well, yeah, that was bad, and it gave ChatGPT a bad reputation in your mind, and that's why we kept improving the models: we don't want to serve you up false content, because then you won't want to work with the product. So that may not be heartening to everyone, but certainly those industry improvements move a lot faster than legislation, because there's a business imperative to get it right.

Yeah, that indeed is the vested interest that I see for a lot of the online companies, and of course the AI companies that are developing this stuff. We're out of time; it flies by, doesn't it? Just to remind you that all these episodes are on the AI Decoded playlist on YouTube; there are some good ones on there as well, so have a look at those. Thank you to Dr Schro, Dr Costello, Miles, and of course to Stephanie. Let's do it again, same time next week. Thanks for watching.
