Russia and Iran use AI to target US election | BBC News
Summary
TL;DR: The transcript from 'AI Decoded' discusses the threat of generative AI in spreading disinformation, with a focus on deepfakes and their impact on democracy and elections. It covers California's new law against deepfakes in elections, watermarking as a potential solution, and the role of social media companies in regulation. Experts weigh in on the challenges of detecting deepfakes and the importance of critical media literacy. The show also features AI's role in debunking conspiracy theories through chatbots, highlighting the potential of AI in combating the spread of false information.
Takeaways
- 📜 California has passed a bill making it illegal to create and publish deep fakes related to upcoming elections, with social media giants required to identify and remove deceptive material from next year.
- 🐱 The script discusses the problem of AI-generated memes, like those of cats and ducks, which have fueled rumors with dangerous consequences.
- 🏛️ Beijing is pushing for AI-generated content to be watermarked to help retain social order, placing responsibility on creators to ensure the authenticity of AI-generated content.
- 🎤 The script mentions how AI has been used to hijack the image of celebrities like Taylor Swift, whose fans were falsely shown endorsing a political candidate.
- 🌐 The Microsoft Threat Analysis Center in New York City works to detect and disrupt cyber-enabled influence threats to democracies worldwide.
- 🔍 The center has detected attempts by Russia, Iran, and China to influence the US election, with each nation using different tactics such as fake videos and websites.
- 🤖 AI is being used to combat the spread of misinformation, with researchers developing tools to detect deep fakes and provide explanations for their authenticity.
- 💡 The discussion highlights the need for watermarking as a potential solution to identify genuine content, but also acknowledges the challenges in keeping up with advancing technologies.
- 🌐 There's a call for a global approach to traceability in AI-generated content, similar to supply chain management, to ensure the origin and authenticity of digital creations.
- 🤖 The script introduces a chatbot designed to deprogram individuals who believe in conspiracy theories by engaging them in fact-based conversations.
Q & A
What is the significance of the bill signed by Governor Gavin Newsom in California regarding deep fakes?
-The bill signed by Governor Gavin Newsom makes it illegal to create and publish deep fakes related to upcoming elections. Starting next year, social media giants will be required to identify and remove any deceptive material, marking California as the first state in the nation to pass such legislation.
How does generative AI amplify the threat of disinformation?
-Generative AI tools, which are largely unregulated and freely available, have the potential to create convincing fake content, including deep fakes and manipulated media, which can be used to spread disinformation and undermine trust in elections and freedoms.
What is the role of the Microsoft Threat Analysis Center in New York City?
-The Microsoft Threat Analysis Center, located in New York City, is a secure facility that monitors attempts by foreign governments to destabilize democracy. It detects, assesses, and disrupts cyber-enabled influence threats to democracies worldwide.
How do the analysts at the Microsoft Threat Analysis Center detect foreign influence attempts on US elections?
-Analysts at the Microsoft Threat Analysis Center detect foreign influence attempts by analyzing data and reports, identifying patterns, and advising governments and private companies on digital threats. They have detected simultaneous attempts by Russia, Iran, and China to influence the US election.
What challenges do AI tools face in detecting deep fakes?
-AI tools face challenges in detecting deep fakes due to the continuous advancement of generative AI technologies, which can create increasingly realistic fake content. Additionally, AI tools may struggle with images or videos that are too far away from what they have seen during training, leading to potential misclassifications.
What is the potential solution to the deep fake problem discussed in the script?
-One potential solution discussed is the use of AI to detect misinformation and deep fakes. This involves training AI tools to identify inconsistencies and anomalies in content, and providing explanations for why certain content is flagged as a deep fake.
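The detection-with-explanation idea described here, together with the out-of-distribution weakness noted in the previous answer (a genuine image misclassified because nothing similar appeared in training), can be sketched as a distance check in a toy feature space. Every value below — the feature vectors, the threshold, the labels — is invented for illustration; real detectors operate in a deep network's high-dimensional embedding space.

```python
import math

# Toy feature vectors for images the detector was "trained" on.
# In a real system these would be embeddings from a neural network.
TRAINING_FEATURES = [
    (0.2, 0.8), (0.3, 0.7), (0.25, 0.75),  # real images
    (0.9, 0.1), (0.85, 0.2),               # known deep fakes
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features, threshold=0.3):
    """Return a label plus a human-readable explanation.

    If the input is too far from anything seen in training, the
    detector abstains rather than giving a confident wrong answer —
    the failure mode behind genuine photos being flagged as fake.
    """
    nearest = min(TRAINING_FEATURES, key=lambda t: distance(features, t))
    d = distance(features, nearest)
    if d > threshold:
        return "uncertain", f"nearest training example is {d:.2f} away; prediction unreliable"
    label = "fake" if nearest[0] > 0.5 else "real"
    return label, f"matches training example {nearest} at distance {d:.2f}"
```

A nearby input gets a label and an explanation; a novel one (say, a never-before-seen event) is flagged as unreliable instead of silently misclassified.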
Why is watermarking proposed as a solution to the deep fake problem?
-Watermarking is proposed as a solution because it can provide a form of traceability and authenticity to digital content. It would allow for the identification of original and verified content, helping to distinguish it from deep fakes.
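The tamper-evidence property behind this proposal — a cryptographic signature attached at the point of creation that any later edit breaks — can be sketched in a few lines. This is only an illustrative stand-in: real provenance schemes use public-key certificates and signed metadata manifests, not a shared key as below.

```python
import hashlib
import hmac

# Illustrative shared secret; real systems use per-device public-key certificates.
CREATOR_KEY = b"device-secret-key"

def sign_at_creation(content: bytes) -> str:
    """Compute a point-of-creation signature over the content's hash."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(CREATOR_KEY, digest, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Any change to the content invalidates the signature."""
    return hmac.compare_digest(sign_at_creation(content), signature)

photo = b"raw image bytes from the camera sensor"
sig = sign_at_creation(photo)
assert verify(photo, sig)                   # untouched content checks out
assert not verify(photo + b" edited", sig)  # tampering breaks the signature
```

The point is the asymmetry: verifying is cheap for anyone, while producing a valid signature for altered content is computationally infeasible — the "chain of evidence" property discussed later in the programme.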
How does the concept of 'situational awareness' relate to the detection of deep fakes?
-Situational awareness in the context of deep fake detection refers to the ability to proactively monitor and analyze content on social media platforms using AI tools. This allows for the establishment of a global scale understanding of where and when disinformation is being spread.
What is the 'debunk bot' mentioned in the script and how does it work?
-The 'debunk bot' is an AI chatbot designed to converse with conspiracy theorists using fact-based arguments to debunk their beliefs. It draws on a vast array of information to engage in conversations and has shown success in reducing conspiracy beliefs by an average of 20% in experimental settings.
How does the debunk bot approach the challenge of changing deeply held beliefs?
-The debunk bot approaches the challenge by providing tailored information and facts directly related to the specific conspiracy theories that individuals hold. It engages in a conversation that summarizes and challenges the beliefs, using evidence to persuade users away from their conspiracy theories.
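The reported experimental design — rate the belief, converse with the bot, re-rate, then average the percentage drop across participants — reduces to simple arithmetic, sketched below. The ratings are made up; the study reported roughly a 20% average reduction.

```python
def average_reduction(before_after):
    """Mean percentage drop in belief ratings across participants.

    Each pair is (rating_before, rating_after) on some positive scale,
    e.g. 0-100 agreement with the chosen conspiracy theory.
    """
    drops = [(before - after) / before * 100 for before, after in before_after]
    return sum(drops) / len(drops)

# Hypothetical ratings before and after an ~8-minute conversation.
ratings = [(90, 70), (80, 66), (100, 84), (60, 46)]
```

With these invented numbers, `average_reduction(ratings)` comes out near the ~20% figure reported in the study.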
Outlines
📜 AI and the Threat of Disinformation
The segment begins with a discussion on the challenges posed by generative AI in creating and disseminating disinformation. It highlights the recent legislation in California that criminalizes the creation of deep fakes related to elections and the upcoming requirement for social media platforms to identify and remove deceptive content. The conversation also explores the global impact, including China's efforts to watermark AI content and the manipulation of public figures like Taylor Swift. The segment emphasizes the role of AI in both creating and combating disinformation, with a focus on the importance of critical media literacy and the challenges of regulating technology across borders.
🔍 Deep Fakes: Detection and Regulation Challenges
This paragraph delves into the difficulty of detecting deep fake imagery and audio, and the potential solutions such as watermarking to authenticate content. It discusses the ongoing 'whack-a-mole' game with technology, where advancements in detection are met with new methods of manipulation. The conversation includes the role of social media companies in regulating content, the challenges of enforcing regulations globally, and the importance of preparing citizens with critical media skills. It also touches on the debate over First Amendment rights and free speech in the context of regulating AI-generated content.
🕵️♂️ AI Tools for Detecting Deep Fakes
The focus of this section is on the development of AI tools to detect deep fakes. It features a discussion with Dr. Christian Schroeder de Witt from the University of Oxford, who is researching methods to identify deep fakes using AI. The conversation includes the use of AI to track down AI-generated misinformation, the limitations of current detection tools, and the need for further research. Examples of deep fake images, such as the Pope in a puffer jacket, are used to illustrate the challenges in detection and the potential of AI in providing explanations for why certain images are deemed deep fakes.
🌐 Social Media and the Spread of Disinformation
This segment discusses the role of social media in the spread of disinformation and the vested interest companies have in maintaining a reliable information ecosystem. It highlights the potential of AI to not only create but also combat deep fakes and misinformation. The conversation includes the potential of context-aware AI to determine the authenticity of content and the importance of situational awareness on a global scale. The segment also touches on the challenges of disinformation spread by conspiracy theories and the difficulty of changing deeply held beliefs with facts alone.
🤖 Debunking Conspiracy Theories with AI
The final paragraph introduces the concept of using AI chatbots to debunk conspiracy theories. It features an interview with Dr. Thomas Costello, who discusses the development of a 'debunk bot' that engages with conspiracy theorists using fact-based arguments. The segment covers the effectiveness of the chatbot in reducing belief in conspiracy theories, the potential for incorporating such technology into existing platforms, and the challenges of changing beliefs that are deeply ingrained. It also raises the risk of using AI for disinformation if not properly regulated.
🏁 The Role of Industry in AI Regulation
In the concluding remarks, the discussion turns to the role of the industry in self-regulating AI technologies. It highlights the importance of continuous improvement in AI models to prevent the spread of false content and the business imperative for companies to provide accurate and reliable information. The segment emphasizes the speed at which industry can adapt compared to legislation and the potential for industry-driven solutions to lead the way in addressing AI-related challenges.
Keywords
💡Deep fakes
💡Disinformation
💡Generative AI
💡Watermarking
💡Election interference
💡Cyber-enabled influence threats
💡AI detection
💡Digital threats
💡Provenance
💡Conspiracy theories
💡Fact-based arguments
Highlights
California Governor Gavin Newsom signed a bill making it illegal to create and publish deep fakes related to upcoming elections.
Social media giants will be required to identify and remove deceptive material from next year in California.
The challenge of distinguishing between fake and real content is becoming increasingly difficult with generative AI tools.
AI memes of cats and ducks have fueled rumors with dangerous consequences, illustrating the impact of disinformation.
Beijing is pushing for AI-generated content to be watermarked to retain social order in a world of manipulated messages.
Taylor Swift's image was hijacked by the former president, who shared fake images of her fans endorsing him, showing the personal impact of AI disinformation.
Microsoft's Threat Analysis Center in New York is at the forefront of defending against cyber-enabled influence threats to democracies.
Russian, Iranian, and Chinese attempts to influence the US election have been detected simultaneously for the first time.
The US election's dramatic nature is complicating outside interference, particularly affecting Russian strategies.
Iranian election influence activity has been detected via bogus websites, currently under FBI investigation.
China is using fake social media accounts to provoke reactions in the US public, increasing hostility on social media.
The debate over watermarking as a solution to generative AI disinformation is discussed, with concerns about potential manipulation.
The importance of preparing citizens with critical media skills to construct narratives and verify information sources is emphasized.
The challenge of regulating AI and deep fakes globally, especially when companies may relocate to less regulated countries, is highlighted.
The potential for AI to track down deep fakes and misinformation is explored, with AI being used to solve the problems it creates.
Researchers are developing AI tools to identify deep fakes by creating explanations for why content is identified as fake.
The limitations of current AI tools in detecting deep fakes, such as issues with finger detection and temporal inconsistencies, are discussed.
The potential for AI to rearchitect systems to provide a cryptographic signature of authenticity for content is considered.
The idea of using AI chatbots to deprogram conspiracy theorists by providing fact-based arguments is introduced.
The debunk bot, an AI chatbot, has shown success in reducing belief in conspiracy theories by 20% on average in experimental conversations.
Transcripts
you are watching the context with me
Christian Fraser it is time for our
regular Thursday feature AI
[Music]
decoded welcome to the program freely
available largely unregulated the
creative tools of generative AI now
amplifying the threat of disinformation
how do we tackle it what can we trust
and how are our enemies using it to
undermine our elections and our freedoms
this week Governor Gavin Newsom signed a
bill in California that makes it illegal
to create and publish deep fakes
related to the upcoming election and
from next year the social media Giants
will be required to identify and remove
any deceptive material it is the first
state in the nation to pass such
legislation is it the new Benchmark some
of this stuff obviously fake some of it
designed to poke fun but look how these
AI memes of cats and Ducks powered the
pet eating Rumor Mill in America with
dangerous
consequences it is a problem too in
China how does the Communist Party
retain social order in a world where the
message can be manipulated Beijing is
pushing for all the AI to be watermarked
and is putting the onus on the
creators and from politics to branding
there is no bigger brand than Taylor
Swift hijacked by the former president
who shared fake images of her fans
endorsing him it affects us
all with me as ever in the studio our
regular commentators and AI presenters
Stephanie Hare is here and from Washington
our good friend Miles Taylor who worked
in National Security advising the former
Trump Administration we'll talk to them
both in a second but before we do that
we're going to show you a short film one
of the many false claims that has
appeared online in recent months was a
story that Kamala Harris had been involved
in a hit and run accident in 2011 that
story was created by a Russian troll
farm and was one of the many
inflammatory stories Microsoft
intercepted the threat analysis unit
that does their work in New York is at
the very Forefront in defending all our
elections our AI correspondent Marc
Cieslak has been to see it Times Square
New York City an unlikely location for a
secure facility which monitors attempts
by Foreign governments to destabilize
democracy it is however home to MTAC the
Microsoft threat analysis Center its job
is to detect assess and disrupt cyber
enabled influence threats to democracies
worldwide the work that's carried out
here is extremely sensitive we're the very
first people that have been permitted to
film
inside it's also the first time Russian
Iranian and Chinese attempts to
influence the US election have all been
detected at once all three are in play
and this is the first cycle where we've
had all three that we can definitely
point to individuals from this
organization serve on a special
presidential Committee in the Kremlin
reports compiled by these analysts
advise governments like the UK and US as
well as private companies on digital
threats this team has noticed that the
dramatic nature of the US election is
complicating attempts at outside
interference the biggest impact of the
switch of president uh Biden for vice
president Harris has been it's really
thrown the Russians so far off their
game they really focused on Biden as
somebody they needed to remove from
office to get what they wanted in
Ukraine Russian efforts have now pivoted
to undermining the Harris Walz campaign
via a series of fake videos designed to
provoke
controversy these analysts were
instrumental in detecting Iranian
election influence activity via a series
of bogus websites the FBI is now
investigating this as well as Iranian
hacking of the Trump campaign we found
that in the source code for these websites what
they were doing was using AI to rewrite
content from a real place and using that
for the bulk of their website and then
occasionally they would write real
articles um when it was a very specific
political point they were trying to make
the third major player in this election
interference is China using fake social
media accounts to provoke a reaction in
the US public experts are unconvinced
these campaigns affect which way people
actually vote but they worry they are
successful in increasing hostility on
social media Marc Cieslak BBC News yeah
that gives you an idea of just how quick
this is advancing Stephanie do you do
you think we're almost at a point as the
technology improves the creative
technology that we're going to be very
close very soon to not knowing the
difference between fact and fiction it's
getting harder and harder to detect a
lot of the deep fake imagery audio is
particularly very difficult to detect
it's a lot easier to fake so yes I think
we're right now possibly in the last us
election where it's kind of easy to see
when you're being manipulated and the
the trick really is do you want to
believe it because what this is all
about is really hijacking your emotions
and watermarks because that is often the
the goto solution to this why would that
not be the the answer to all the ills of
generated generative AI I still wonder
if there would be ways of manipulating
even that but it's probably a pretty
good start it's just that thing you
always feel like you're playing
whack-a-mole with these Technologies you
know you do one thing and then it
advances and you have to catch up again
so we would probably start with
watermarks and then there would be an
advance and a kickback and we'd have to
react to that and so on and so forth I
think it's also about preparing citizens
though to have the critical media skills
that we all need to be able to construct
narratives look at who is giving us
information and just does it check with
reality miles um I was saying to
Stephanie this is a good step forward
what's happened in California this week
you've got the governor there putting
the onus on the social media companies
and on the creative companies to do
something about this and particularly
around the election and then Stephanie
said to me well okay American companies
regulated by American legislators why
wouldn't they just go to
China look I mean I think that's one of
the concerns always when it comes to
Tech regulation and and Christian you
remember the debate well over encryption
in the United States there was the San
Bernardino terrorist attack uh you know
almost 10 years ago now where the FBI
could not get into the shooter's phone
and it led to a big debate in the United
States about these encrypted messaging
apps like Telegram and signal and
whether it should be legislated that
those were forbidden in the United
States opponents of those laws though
said well sure you can outlaw them here
but someone overseas is going to create
the same apps and it's going to be
really difficult to prevent people from
using a version of it overseas we Face
the same problem here with regulations
around deep fakes and AI it's
only as far as us legislation and law
enforcement can reach that those types
of things can be enforced so there is a
big challenge here but also there's a
domestic challenge about the First
Amendment implications and Free Speech
implications and of course Governor
Newsom's signing of that law has opened up
that debate as well so there will be a
lot of contention the next few years
about how to get this right from a
legislative and Regulatory standpoint
the other thing that occurs to me and we
talk about protecting Children online
all the time on this program one of the
issues the companies always come up
against is finding the material and
getting rid of it if you are having to
find very good deep fake material
that process becomes much more difficult
doesn't it and how do we find a metric
to to hold the social media companies
and the online companies to
task well I think Stephanie said
something really important here which
was the game of whack-a-mole you're
playing if you think that watermarking
you know basically sticking a putting a
sticker on this content and saying this
is fake if you think that's a solution
it's going to be really hard to keep up
a lot of the experts I talk to in AI say
that maybe that a short-term solution
but in the longer term you have to
rearchitect the technology so content is
certified as created at this place at this time and that
can't be changed right it's tied to a
public Ledger uh not that people can see
your photos publicly but that's a
cryptographic signature that can't be
broken eventually all of our Tech will
be signed with that provenance that says
I am real and you'll know if it's not
real because it won't have that point of
creation certification but it's years
before we're there and in the meantime a
lot of difficult conversations are going
to be had it's almost a supply chain
approach or even a criminal approach
when you have a chain of evidence and
you have to be able to follow it all the
way through and you can't tamper with it
or when we had mad cow disease here in
United Kingdom many years ago people
suddenly wanted to know when they were
going grocery shopping they wanted to
buy some beef what farm did it come from
and suddenly people realized they needed
traceability all the way through the
food chain so I'm wondering if there's a
parallel there to help people understand
all of the things that you're creating
can have that encoded so you would
always be able to know it's like
following through like a painting when
is a painting sold it might go through
50 different hands if it's 400 years old
you know before it finally ends up in
the Met um where did it come from was it
illegally bought you know Etc you you
should be able to follow data through in
the same way let's bring in someone uh
who is working in in this field here in
at the studio with us is Dr Christian
Schroeder de Witt uh he is a senior research
associate in machine learning at the
University of Oxford he and his team are
researching how to identify some of
these deep fakes using AI welcome to the
program um we were just talking about
how quickly things are advancing to the
point where to the naked eye it's
becoming more difficult certainly with
imagery what sort of Technology are you
developing that makes that easier yes so
Christian um I really like this
discussion um I think um the solution to
our problems of establishing provenance um
of content um will involve both a lot of
research but also wider adoption of
existing Technologies so in terms of
research I think the clip really brought
home you know that um AI is being used
to amplify the misinformation problem so
let's use AI to solve it so some of the
research that I do is about using AI to
detect misinformation so you're using
the AI to track down the Deep fake AI so
basically yes so so what I spent the
summer doing um you know doing some
research doing some research with BBC
verify um and University of Oxford was
um just you know when you have a picture
for example um explain
whether it is a deep fake or not let's
bring one up I've got one um that I
think you've looked at and people will
be familiar with this it's it's the Pope
in a puffer jacket which actually did
get into some uh news streams around the
time that this photo came out so
although we're joking it did actually
deceive quite a lot of people show me
what you did with this yeah so exactly
so you can see um the pope and the
puffer jacket obviously from the context
it's quite clear it's a deep fake right
um and it's probably for entertainment
purposes but a human expert for example
BBC verify could look at this picture
and could um find the details that are a
bit off for example the spectacle seem
to be fused into the cheeks or the
crucifix doesn't quite attach to the
chain right and so the question is um
you see it's very important to have
these explanations as well not just like
a number of like this is 0.7% or 0.7
deep fake or not but you need to have an
explanation for why it is a deep fake so
we now have ai tools that can create
these explanations as well right um what
something that you put on the desktop
something that you could run a
photograph through yeah potentially yes
okay um but these tools still have a lot
of failure cases and this is where we
need more research okay um yeah where do
they fail and why famously it's things
like they can't get fingers right so you
might get six fingers on a hand yes so
this is a classic on videos for example
right like you have some sort of
temporal inconsistency so an object
disappear suddenly for example right um
but the problem is that these tools are
trained on a lot of data and they're
learning so-called features patterns
right that help them to make these
decisions now it can happen that
sometimes these patterns are present in
some images that are too far away from
what it has seen during training getting too
technical on that can you explain that
to people is it is it a pixel difference
is it is it in the way I mean it's not
in the way the image looks is it they're
looking the the AI presumably is looking
deeper into the image than that yes so
the AI is actually taking in an image
and then it is um projecting this into
some very high dimensional space um and
within this High dimensional space um
basically um um you then do like a
dimensionality reduction into a lower
space and then in this lower space um
what you can do is um you can um uh form
these features okay and then if you have
an image that it hasn't seen during
training then these features um might
not generalize to that image and then
like you can have issues where um an image
evokes some impressions that are
wrong right so you see some
Reflections or something and actually
they are not deep fake but
Stephanie mentions the photographs that
they they struggle with I've got one
here this is Lionel Messi kissing the
World Cup much to my chagrin but um but
the but the um this one is
real but the machine thought it was fake
why yes so um so the machine might think
so just because it maybe hasn't seen an
image that's close enough to this
picture in its training set right and as
we always get new images in for example
winning the World Cup Messi winning the
World Cup was a new occasion um um it
might think that for example some
Reflections and the trophy um or the way
Messi um holds his hand um or maybe the
skin tone um aren't natural and the
problem is we then get these
explanations and these explanations can
be very very convincing um but they're
nevertheless wrong miles do you like
this idea of AI tracking
AI deep
fakes I don't just like it Christian I
love it we've got to use AI against AI
to protect ourselves it's actually going
to be our best asset and one of the
things that's interesting that's
happening right now is we always focus
on who's developing the technology that
could be used for bad but uh my fellow
Oxonian there on set uh and and a lot of
folks around the world are now investing
time and resources into building
companies on deep fake detection I mean
there are companies in the United States
like Truepic and Reality Defender that
are exciting they're venture-backed a
lot of people want to go work for them
and what do those companies do they
focus solely on trying to prove what is
and isn't real and one of the things
that's just become possible really only
in the past few months is some of these
Technologies are leveraging context
awareness of the world to determine
whether something's fake or real so
these models aren't just looking at the
image and saying it looks manipulated
the models can also say well the Pope
the past couple weeks has been on
vacation in Italy there's no way this
photo was just taken and he was wearing
a puffer jacket and they can give you a
confidence score and that's exciting are
you incorporating that in your in your
absolutely so this is incorporating wider
context on where the content is found
and when it is found and who is depicted
so sort of semantic information
absolutely yes it strikes me that the
social media companies and the online
companies have a vested interest in this
because if you can't tell fact from
fiction you get what's called a liar's
dividend right that that actually you
become a disruptor you you you poison
the well so much that actually No One
Believes anything and that's not good
for a social media model that makes
their money from from spreading news and
and informing people when it just raises
the question of what social media is for
right so it was quite exciting at first
when it was this new thing and you could
stay in touch with your friends and then
a lot of people journalists would use
certain tools to keep up with the news
and get breaking news fast but once that
starts feeling like actually they're
just reading your data or you're looking
for news to get it fast but it's not
actually reliable and it's being flooded
information ecosystems being flooded all
the time eventually you might just turn
off and that's without even going into
the mental health implications of being
on these sites right which we know are
really harmful for people so I wonder
sometimes if we might have lived
through the Golden Age of social media
and we're now entering this new phase
and if it isn't cleaned up people could
just end up leaving it or only going to
the way that you would read the national
Inquirer in the United States to read
about aliens or something are the big
developers interested in what you're
doing yes absolutely so so the summer my
collaboration was with a big tech
company in fact um so there is a lot of
interest in these Solutions actually the
interest goes even further so what we
can do now we can proactively try to
look for deep fakes and disinformation
in social media platforms right using
autonomous agents so I think this is
where things are going and then we can
establish this situational Awareness on
a sort of global scale H which miles
also I've got to also ask you is this
the right environment to be developing
the right country do you get the support
for stuff like this I think so yes yeah
yeah generally yes I think UK is a great
place well that's encouraging isn't it
uh on that note um one of the problems
here is not so much the Deep fake news
as the disinformation that is spread by
conspiracy theorists who are creating
material they believe to be true what if
we could bring the conspiracy theorist
from the Shadows and back to the light
coming after the break we'll hear about
the AI chatbot that is deprogramming the
people who have disappeared down the
rabbit holes we'll be right back stay
with
us welcome back the moon landings that
never happened the covid microchip that
was injected into your arm the pizza
pedophile ring in Washington conspiracy
theories abound often with dangerous
consequences many have tried reasoning
with the conspiracy theorists but to no
avail how do you talk to someone so
convinced of what they
believe who is equally suspicious of why
you would even be challenging those
beliefs well researchers have set about
creating a chatbot to do just that it
draws on a vast array of information to
converse with these people using bespoke
fact-based arguments and the debunk bot
as it's known is proving remarkably
successful joining us on Zoom is the
lead researcher Dr Thomas Costello
he's the associate professor in
Psychology at the University of
Washington you're very welcome to the
program tell us what the debunk bot
does. Yeah, sure; thanks, I'm happy to be here. So the idea is that studying conspiracy theorists and trying to debunk them has been pretty hard until now, because there are so many different conspiracy theories out there in the world, and you need to look across this whole corpus of information comprehensively to debunk all of them and study them in a systematic way. And large language models, these AI tools, are perfect for doing just that. So we ran an experiment where we had people come in and describe a conspiracy theory that they believed in and felt strongly about. The AI summarised it for them and they rated it, and then they entered into a conversation with this debunk bot, where it was given exactly what they believed and was set up to persuade them away from the conspiracy theory using facts and evidence. What we found at the end of this roughly eight-minute conversation, this back and forth, was that conspiracy theorists reduced their belief in their chosen conspiracy by about 20% on average; and actually, one in four people came out the other end of that conversation actively uncertain about their conspiracy, so they were newly sceptical. And so is the basis that they don't know where to go to get this information, and they are suspicious of anybody that might have the answers to the things that concern them? Yeah, I mean, that could be part of it. I think, really, it's just being provided with facts and information that's tailored to exactly what they... And how do you deploy it? Because I can't imagine that conspiracy theorists are wandering around saying, disprove the conspiracy theory that I believe to be true. Yeah, no, I mean, that's a great question; I think it's one that I'd be curious to hear others' answers about too. In the studies, we paid people to come and do it. That said, I'm optimistic about, you know, the truth motivations of human beings in general. I think people want to know what's true, and so if there's a tool that they trust to do that, then all the better. Yeah. Miles, can you
see a purpose for this in America? Yeah, I mean, I can certainly see this principle being incorporated into a lot of technology. A lot of us already, every day, use things like ChatGPT, and I'll actually give you an example, Christian, of ChatGPT disproving something for me. So there's a famous Winston Churchill quote: a lie gets halfway around the world before the truth can get its pants on. No quote better describes the conversation we're having, as to how fast this disinformation spreads. Well, guess what: I put that into ChatGPT before I did a presentation on this subject, and it said, hold on a second, that's actually not a quote from Winston Churchill; it's a quote from Jonathan Swift in the 1700s. So AI helped me disprove that misinformation that's been around for years. So yes, I think this is important, and it should be integrated into these technologies. And Christian, is
this where the two worlds collide? Because presumably there are conspiracy theorists who believe something so fervently that they put out AI-generated material as well; so if you can deal with the conspiracy theory, maybe you can stop the prevalence of fake material. Yeah, potentially. I must say, though, that this study was done under laboratory conditions, so it will be very interesting to see whether these results also translate into the real world. And then also, the large language models that were used were safety fine-tuned; that means, you know, they were sort of programmed to tell the truth and so on. And if that safety fine-tuning is not there, they could be used for something we call interactive disinformation: they could be used to convince people of things that are not true. So that's the big risk that I see here. And Thomas, I've
got a question for you. I'm curious just about how much having good information actually changes people's minds, and the example I would give is smoking. We've known for decades that smoking is bad for you; everybody agrees, we've got all the data to back it up, we put labels on it really clearly, and yet people still smoke. And when you talk to a smoker and try to persuade them to give it up because you care about them, they will sometimes really entrench. It's really hard to break, not just because it's addictive, but because they maybe want to smoke. So I see this parallel, perhaps, with conspiracy theories: we have beliefs, and information is not always enough to change them. It's not just about facts; it's about something
else. Yeah, yeah, that's a great point. I mean, I think that in the case of smoking, or other kinds of drug use, we know that it's bad for us when we start doing it; it's fundamentally not about information. Whereas beliefs, and particularly conspiracy beliefs, are often descriptive: they're accounts of what went on in the world; that, you know, Al-Qaeda didn't put together the 9/11 terrorist attacks, it was the government. And so dealing with claims about the world is something that I think is conducive to informational persuasion in a way that maybe nicotine use is not. Yeah. I mean, Miles, we focus so much on the legislating; the questions I always ask you are how far behind Congress is on that, what statehouses are doing about AI legislation. But what we've shown tonight is actually that it's the industry itself that is forcing the change; maybe it's not legislation, because legislation is always one step behind. Well, Christian, I'm going to
give you an embarrassing admission that proves that point. So I was at dinner last night with one of the creators of ChatGPT, GPT-3, one of the earlier versions; she worked for Sam Altman. We were talking about the technology, and I complained to her. I said, you know, I was teaching a course at the University of Pennsylvania and I got lazy: I was supposed to come up with a list of 25 books on a subject for my students, and I said, I'm going to look it up on GPT: what are the best 25 books? Produced it, emailed it out. Well, guess what: my students emailed me and said all of those books are fake. GPT-3 came up with a bunch of fake books. And I said this to her, and she said, well, yeah, and that was bad, and it gave ChatGPT a bad reputation in your mind, and that's why we kept improving the models: we don't want to serve you up false content, because you won't want to work with this product. And so that may not be heartening to everyone, but certainly those industry improvements move a lot faster than legislation, because there's a business imperative to get it right.
Yeah, that indeed is the vested interest that I see for a lot of the online companies and, of course, the AI companies that are developing this stuff. We're out of time; it flies by, doesn't it? Just to remind you that all these episodes are on the AI Decoded playlist on YouTube; some good ones on there as well, so have a look at those. Thank you to Dr Schro... Dr Costello, Miles and, of course, to Stephanie. Let's do it again, same time next week. Thanks for watching.
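For readers curious how the debunk-bot study described in the interview fits together, here is a minimal, purely illustrative Python sketch of the protocol (summarise the participant's belief, hold a short back-and-forth, then re-rate the belief). All function names are hypothetical, the model calls are stubbed out rather than using any real LLM API, and the 20% reduction is hard-coded only to mirror the average result quoted in the conversation, not computed.

```python
# Illustrative sketch of the debunk-bot protocol (all names hypothetical).
# In the real study, summarize_claim and debunk_turn would be calls to a
# large language model prompted with the participant's own claim.

def summarize_claim(claim: str) -> str:
    """Stand-in for the LLM step that restates the belief back to the user."""
    return f"You believe that: {claim}"

def debunk_turn(claim: str, user_msg: str) -> str:
    """Stand-in for one persuasive, fact-based model reply in the dialogue."""
    return f"Regarding '{claim}': here is evidence addressing '{user_msg}'."

def run_session(claim: str, prior_belief: float, n_turns: int = 3) -> float:
    """Simulate a short back-and-forth and return the post-session rating.

    prior_belief is on a 0-100 self-report scale. The 20% average drop
    reported in the interview is hard-coded here purely for illustration.
    """
    transcript = [summarize_claim(claim)]
    for turn in range(n_turns):
        transcript.append(debunk_turn(claim, f"objection {turn + 1}"))
    return prior_belief * 0.8  # illustrative 20% average reduction

post = run_session("the moon landings were staged", prior_belief=90.0)
print(round(post, 1))  # 72.0
```

A real deployment would replace the stubs with multi-turn LLM calls and would measure the belief change empirically rather than assuming it, which is exactly the laboratory-versus-real-world caveat raised in the discussion.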