How We Can Protect Truth in the Age of Misinformation
Summary
TL;DR: The talk discusses the alarming spread of fake news and its impact on societies, economies, and democracies. It recounts the 2013 AP hack that falsely reported an explosion at the White House, causing a stock market crash, and details the misinformation campaigns of Russia's Internet Research Agency during the 2016 US election. It explores why false news spreads faster than the truth, driven by novelty and human psychology, and warns of the impending threat of synthetic media, powered by generative adversarial networks, which can create convincing fake videos and audio. The speaker calls for vigilance and reliance on trusted news sources to navigate the era of misinformation.
Takeaways
- 🚨 A false tweet sent from the Associated Press's hacked account in 2013 about explosions at the White House led to a stock market crash, demonstrating the impact of misinformation on financial markets.
- 🌐 The Internet Research Agency, linked to the Kremlin, was involved in spreading misinformation on social media platforms to influence the 2016 US presidential election.
- 📊 A study by Oxford University revealed that a third of social media content about the Swedish elections was fake or misinformation.
- ⏰ Misinformation on social media can have severe consequences, such as delaying first responders in emergency situations and leading to loss of lives.
- 📉 The spread of fake news is more rapid and extensive than true news, with false political news being particularly viral.
- 🤖 Bots play a role in spreading misinformation, but they are not solely responsible for the differential diffusion of truth and falsity online.
- 🧠 Humans are more likely to share novel information, which often includes false news, as it makes them appear knowledgeable and increases their perceived status.
- 🎭 Synthetic media, powered by technologies like generative adversarial networks, is becoming increasingly convincing and poses a threat to the authenticity of visual and audio content.
- 🛡️ Addressing the spread of misinformation requires a multifaceted approach, including labeling, incentives, regulation, transparency, and the development of algorithms to detect fake news.
- 🌟 The speaker emphasizes the importance of defending the truth against misinformation through technology, policy, and individual responsibility.
Q & A
What was the impact of the false tweet by Syrian hackers on the Associated Press in 2013?
-The false tweet about explosions at the White House and injuries to President Barack Obama led to a stock market crash, wiping out $140 billion in equity value in a single day.
Who was indicted by Robert Mueller for meddling in the 2016 U.S. presidential election?
-Three Russian companies and 13 Russian individuals were indicted for a conspiracy to defraud the United States through social media manipulation.
What was the role of the Internet Research Agency during the 2016 U.S. presidential election?
-The Internet Research Agency, a shadowy arm of the Kremlin, created and propagated fake news and misinformation on social media platforms to sow discord during the election.
How did the spread of fake news affect the Swedish elections according to an Oxford University study?
-A third of all information spreading on social media about the Swedish elections was found to be fake or misinformation.
What are the potential consequences of misinformation during emergency situations like terrorist attacks or natural disasters?
-Misinformation can lead to lost minutes and lives during emergency responses, as it can misguide first responders about the location of terrorists or trapped individuals.
How did the study published in Science in March 2018 analyze the spread of true and false news on Twitter?
-The study analyzed verified true and false news stories on Twitter from 2006 to 2017, comparing their diffusion, speed, depth, and breadth of spread.
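The diffusion measures named here — a cascade's size, depth, and breadth — have simple graph-theoretic definitions. Below is an illustrative sketch (the edge list and account names are hypothetical, not data from the study): depth is the longest retweet chain, and breadth is the largest number of accounts at any single level of the cascade.

```python
from collections import defaultdict, deque

def cascade_stats(edges, root):
    """Compute (size, depth, breadth) of a retweet cascade.

    edges: list of (parent, child) retweet pairs; root is the original tweet.
    Depth = longest retweet chain; breadth = max accounts at any one depth.
    """
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    depth_counts = defaultdict(int)   # depth -> number of accounts there
    queue = deque([(root, 0)])        # breadth-first walk from the root
    size = 0
    while queue:
        node, d = queue.popleft()
        size += 1
        depth_counts[d] += 1
        for c in children[node]:
            queue.append((c, d + 1))

    return size, max(depth_counts), max(depth_counts.values())

# A small "starburst with a tendril": A's tweet is retweeted by B, C, D;
# E retweets B, and F retweets E.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "E"), ("E", "F")]
print(cascade_stats(edges, "A"))  # (6, 3, 3)
```

The study compared these quantities, plus diffusion speed, between verified true and false cascades.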
What was the 'novelty hypothesis' proposed to explain why false news spreads more widely on social media?
-The 'novelty hypothesis' suggests that humans are drawn to and share novel information because it makes them seem knowledgeable and increases their perceived status.
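The study scored novelty with information-theoretic measures comparing a tweet against the corpus a user saw in the prior 60 days. The sketch below is a loose, assumed proxy for that idea, not the paper's exact method: it scores a tweet's word distribution against a user's history with a smoothed KL divergence, so a tweet full of unseen words scores as more novel.

```python
import math
from collections import Counter

def novelty_kl(tweet_words, history_words, alpha=0.01):
    """KL divergence of a tweet's word distribution from a user's history.

    Higher = more novel relative to what the user recently saw. alpha is
    additive smoothing so unseen words don't yield infinite divergence.
    """
    vocab = set(tweet_words) | set(history_words)
    t, h = Counter(tweet_words), Counter(history_words)
    t_total = len(tweet_words) + alpha * len(vocab)
    h_total = len(history_words) + alpha * len(vocab)
    kl = 0.0
    for w in vocab:
        p = (t[w] + alpha) / t_total   # tweet distribution
        q = (h[w] + alpha) / h_total   # history distribution
        kl += p * math.log(p / q)
    return kl

# Hypothetical example: a surprising tweet scores higher than a familiar one.
history = "markets rose today as earnings beat expectations".split()
familiar = novelty_kl("markets rose again today".split(), history)
novel = novelty_kl("explosions reported at the white house".split(), history)
print(novel > familiar)  # True
```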
What role do bots play in the spread of misinformation according to the speaker's research?
-Bots accelerate the spread of both false and true news online, but they are not responsible for the differential diffusion of truth and falsity, as humans are the primary agents in spreading misinformation.
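The bots-in/bots-out comparison can be sketched in a few lines. This toy example (the account names and bot labels are invented for illustration) computes how much bot-flagged accounts multiply a story's reach; the study's finding was that this multiplier was roughly the same for true and false news, so removing bots does not erase the gap between them.

```python
def reach(retweeters, exclude=frozenset()):
    """Count retweets after dropping excluded (e.g., bot-flagged) accounts."""
    return sum(1 for user in retweeters if user not in exclude)

bots = frozenset({"bot_1", "bot_2"})
false_rts = ["alice", "bot_1", "bob", "bot_2", "carol", "dave"]
true_rts = ["erin", "bot_1", "frank"]

for label, rts in [("false", false_rts), ("true", true_rts)]:
    multiplier = reach(rts) / reach(rts, exclude=bots)
    print(label, multiplier)  # bots' multiplier on spread: 1.5 for both
```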
What are generative adversarial networks and how do they contribute to the rise of synthetic media?
-Generative adversarial networks are machine learning models that consist of a generator creating synthetic media and a discriminator that tries to determine the authenticity of the media. This technology is contributing to the creation of very convincing fake videos and audio.
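The adversarial loop can be shown with a deliberately tiny, one-dimensional stand-in: instead of images or audio, the "real media" here are just numbers drawn near 3.0, the generator is a linear map on noise, and the discriminator is a logistic classifier. Everything below (the architectures, learning rate, and data) is a toy assumption, not how production GANs are built, but the alternating generator/discriminator updates are the same in structure.

```python
import math
import random

random.seed(0)
sigmoid = lambda u: 1.0 / (1.0 + math.exp(-u))

real_sample = lambda: random.gauss(3.0, 0.5)  # "authentic media": numbers near 3

w, b = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + b): "is x real?"
a, c = 1.0, 0.0   # generator G(z) = a*z + c: turns noise into a fake sample

lr, batch = 0.02, 32
for step in range(3000):
    # Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    dw = db = 0.0
    for _ in range(batch):
        x = real_sample()
        p = sigmoid(w * x + b)
        dw += -(1 - p) * x / batch          # gradient of -log D(x)
        db += -(1 - p) / batch
        g = a * random.gauss(0, 1) + c      # fake sample
        q = sigmoid(w * g + b)
        dw += q * g / batch                 # gradient of -log(1 - D(g))
        db += q / batch
    w -= lr * dw
    b -= lr * db

    # Generator update: push D(fake) -> 1, i.e., fool the discriminator.
    da = dc = 0.0
    for _ in range(batch):
        z = random.gauss(0, 1)
        g = a * z + c
        q = sigmoid(w * g + b)
        dg = -(1 - q) * w                   # gradient of -log D(g) w.r.t. g
        da += dg * z / batch
        dc += dg / batch
    a -= lr * da
    c -= lr * dc

fake_mean = sum(a * random.gauss(0, 1) + c for _ in range(2000)) / 2000
print(round(fake_mean, 1))  # the fakes have drifted toward the real mean of 3.0
```

Each round, the generator moves its output toward whatever the discriminator currently accepts as real; that feedback loop is what makes GAN-produced synthetic media progressively harder to distinguish from the genuine article.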
What are the potential dangers of synthetic media as described in the script?
-Synthetic media can be used to create convincing fake videos and audio, potentially making it look like anyone is saying anything, which can be dangerous for trust in information and the truth.
What are the five potential paths to address the problem of misinformation as outlined in the script?
-The five potential paths are labeling, incentives, regulation, transparency, and algorithms and machine learning technology to detect and dampen the spread of fake news.
Why is it important for humans to be involved in the technology designed to combat fake news?
-Humans must be involved because any technological solution is underpinned by ethical and philosophical questions about truth and falsity, and who has the power to define them.
Outlines
🚨 The Impact of False News on Society and Economy
The paragraph discusses the significant impact of false news, exemplified by a 2013 tweet from the Associated Press' hacked Twitter account about explosions at the White House injuring President Obama. This false news quickly went viral, causing a panic reaction in financial markets that led to a loss of $140 billion in equity value. The narrative then shifts to the role of the Internet Research Agency in spreading misinformation during the 2016 U.S. presidential election, reaching millions through social media platforms. The paragraph concludes with examples of how misinformation can have severe consequences, including influencing elections and even leading to loss of lives in emergency situations.
📊 The Virality of False News vs. True News
This section delves into the study of the spread of false versus true news on Twitter, as published in the journal Science. It highlights how false news stories spread more rapidly and extensively than true ones, sometimes by an order of magnitude. The study controlled for various factors such as the number of followers, activity levels, and credibility of the users spreading the news. A key finding was that false news was 70% more likely to be retweeted than true news, despite being shared by users who were less influential and active on the platform. The paragraph introduces the 'novelty hypothesis,' suggesting that the novelty of information plays a significant role in its spread, as people are drawn to and share new and surprising information.
🤖 The Role of Bots in Spreading Misinformation
The paragraph explores the role of bots in the dissemination of false news. It clarifies that while bots do accelerate the spread of false news, they also do the same for true news, implying that bots are not the primary reason for the differential spread of truth and falsehood. The speaker emphasizes that the responsibility for the spread of misinformation lies with human users. The paragraph also foreshadows the increasing challenges posed by emerging technologies that can create highly convincing synthetic media, which can further exacerbate the problem of misinformation.
🎭 The Threat of Synthetic Media and Deepfakes
This section warns of the impending threat of synthetic media, including deepfakes—highly realistic fake videos and audio generated using machine learning models like generative adversarial networks (GANs). The speaker explains how these technologies can be used to create convincing fake content, making it difficult to distinguish between real and fake information. Examples are given of how such technology could be misused to create damaging fake statements by public figures. The paragraph underscores the need for vigilance and reliance on trusted news sources in the face of such challenges.
🛡 Combating Misinformation: Potential Solutions
The final paragraph discusses potential strategies to combat the spread of misinformation. It suggests labeling information with credibility indicators, adjusting economic incentives to discourage the spread of false news, implementing regulations that balance transparency with privacy, and demanding transparency from social media platforms about their algorithms. The speaker also mentions the need for algorithms and machine learning to help identify and mitigate the spread of fake news. However, it emphasizes that technology is not a panacea and that ethical and philosophical considerations are crucial in defining truth and managing the flow of information.
Keywords
💡Fake News
💡Misinformation
💡Viral
💡Internet Research Agency
💡Novelty Hypothesis
💡Information Cascade
💡Synthetic Media
💡Generative Adversarial Networks (GANs)
💡Economic Incentive
💡Transparency
💡Algorithms
Highlights
On April 23, 2013, a false tweet posted by Syrian hackers from the Associated Press's compromised account caused a stock market crash, demonstrating the impact of misinformation.
The Internet Research Agency, a Kremlin-linked organization, was involved in spreading misinformation during the 2016 U.S. presidential election.
Misinformation campaigns can have severe real-world consequences, including influencing elections and even inciting violence.
A study by Oxford University revealed that a third of social media information during the Swedish elections was fake.
False news spreads more rapidly and widely than true news, with political news being particularly viral.
Contrary to expectations, those who spread false news on Twitter tend to have fewer followers and are less active, not more.
The novelty hypothesis suggests that people are more likely to share surprising and new information, which often includes false news.
False news often elicits more surprise and disgust in reactions, indicating its novelty and impact on audiences.
Bots accelerate the spread of both true and false news, but they do not account for the differential diffusion of truth and falsity.
Generative adversarial networks and AI democratization are enabling the creation of convincing fake videos and audio.
The rise of synthetic media poses a threat to the ability to discern truth from falsehood, potentially leading to a 'post-truth' era.
Labeling information with credibility and source transparency could be a way to combat the spread of misinformation.
Economic incentives play a role in the spread of false news, and addressing this could reduce its prevalence.
Regulation of social media platforms could help, but it also carries risks of suppressing minority opinions in authoritarian regimes.
Transparency in algorithms is needed to understand their impact on society, but it conflicts with privacy and security concerns.
Algorithms and machine learning can help identify and limit the spread of fake news, but ethical considerations are crucial.
Defending the truth against misinformation requires vigilance, technological solutions, policy changes, and individual responsibility.
Transcripts
[Music]
on April 23rd of 2013 The Associated
Press put out the following tweet it
said breaking news two explosions at the
White House and Barack Obama has been
injured this tweet was retweeted more
than four thousand times in less than
five minutes and it went viral
immediately thereafter but this tweet
was not real news this was false news
that was propagated by Syrian hackers
that had infiltrated the AP Twitter
handle their purpose was to disrupt
society but they disrupted much more
because automated trading algorithms
immediately seized on the sentiment on
this tweet and began trading based on
the potential that the President of the
United States had been injured or killed
in this explosion and as they started
trading they immediately sent the stock
market crashing wiping out a hundred and
forty billion dollars in equity value in
a single day earlier this year Robert
Mueller special counsel prosecutor in the
united states issued indictments against
three russian companies and 13 russian
individuals on a conspiracy to defraud
the united states by meddling in the
2016 presidential election and what this
indictment tells us is the story
of the internet research agency the
shadowy arm of the kremlin on social
media housed in this nondescript
building in st. petersburg with four
stories dedicated to the creation and
propagation of fake news and
misinformation on twitter facebook and
all other social media platforms during
the presidential election alone the
Internet Research Agency's efforts reached 126
million people on Facebook in the United
States
issued three million individual tweets
and forty three hours worth of YouTube
content all of which was fake
misinformation designed to sow discord
in the u.s. presidential election a
recent study by Oxford University showed
that in the recent Swedish elections a
full third one-third of all of the
information spreading on social media
about the election was fake or
misinformation and these types of
misinformation don't just affect
economies and democracies but when we
talk about first responders that are
responding to a terrorist attack or
responding to a natural disaster
misinformation spreading about where the
terrorists are or which building people
are trapped in can mean minutes lost and
therefore lives lost in addition these
types of social media misinformation
campaigns can spread what has been
called genocidal propaganda for
instance against the Rohingya in Burma
or recently on whatsapp triggering mob
killings in India if you see the quote
at the bottom of the page it says fake
news is blamed for influencing elections
in the West but in India it's killing
people
we studied fake news and began studying
it before it was a popular term and we
recently published the largest ever
longitudinal study of the spread of fake
news online on the cover of science in
March of this year we studied all of the
verified true and false news stories
that ever spread on Twitter from its
inception in 2006 to the present day
2017 in fact and when we studied this
information we studied verified news
stories that were verified by six
independent fact-checking organizations
so we knew which stories were true and
which stories were false this is an
example of the type of information we
have in red you see the rise and fall of
false news stories over time in green
true news stories over time and in yellow an
insidious category that we called mixed
news which contained information that
was partially true and partially false
some of the most difficult to root out
and some of the most difficult for
people to discern this is a graph of the
false political news that was spreading
during this period and you see it rise
and fall you see spikes of false news
during the u.s. presidential elections
on Twitter and you see one massive spike
of mixed information during the
annexation of Crimea it begins and ends
during that one and a half month period
that Crimea was annexed I tell this
story of the Crimean annexation and the
role of misinformation and fake news in
my upcoming book the hype machine I will
save that story for you in the book this
is what these cascades look like these
are cascades of true and false
information on Twitter the larger red
cascade is a false news tweet the green
one a true news tweet and it begins with
a starburst pattern of retweets at the
beginning of the Cascade and then you
see these tendrils like jellyfish
emanating from the starbursts one person
retweeting another and retweeting
another and retweeting another and these
types of structures have mathematical
properties we can measure their
diffusion the speed of their diffusion
the depth and breadth of their diffusion
how many people become entangled in this
information cascade and so on and what
we did in this paper was we compared the
spread of true news to the spread of
false news and here's what we found we
found that false news diffused further
faster deeper and more broadly than the
truth in every category of information
that we studied sometimes by an order of
magnitude and in fact false political
news was the most viral it diffused
further faster deeper and more broadly
than any other type of false news when
we saw these results we were at once
worried but also curious why why does
false news travel so much further
faster deeper and more broadly than the
truth the first hypothesis that we came
up with was well maybe people who spread
false news have more followers or follow
more people or tweet more often or maybe
they're more often verified users of
Twitter with more credibility or maybe
they've been on Twitter longer so we
checked each one of these in turn and
what we found was exactly the opposite
false news spreaders had fewer followers
followed fewer people were less active
less often verified and had been on
Twitter for a shorter period of time
and yet false news was 70% more likely
to be retweeted than the truth
controlling for all of these and many
other factors so we had to come up with
other explanations and what we devised
what we called a novelty hypothesis so
if you read the literature it is well
known that human attention is drawn to
novelty things that are new in the
environment and if you read the
sociology literature you know that we
like to share novel information because
it makes us seem like we're in the know
it makes us seem like we have access to
inside information and we've gained in
status by spreading this kind of
information so what we did was we
measured the novelty of an incoming true
or false
tweet compared to the corpus of what
that individual had seen in the 60 days
prior on Twitter so we used information
theoretic measures of the information
content in these true or false tweets
compared to all of the information that
they had seen in the 60 days prior to
this incoming true or false tweet and we
measured information novelty across
three distinct measures and across all
of these measures false news was much
more novel than the truth but that
wasn't enough because we thought to
ourselves well maybe false news is more
novel in an information theoretic sense
but maybe people don't perceive it as
more novel so to understand people's
perceptions of false news we looked at
the information and the sentiment
contained in the replies
to true and false tweets and what
we found was that across a bunch of
different measures of sentiment surprise
disgust fear sadness anticipation joy
and trust false news exhibited
significantly more surprise and disgust
in the replies to false tweets and true
news exhibited significantly more
anticipation joy and Trust in reply to
true tweets the surprise corroborates
our novelty hypothesis this is new and
surprising and so we're more likely to
share it at the same time there was
congressional testimony in front of both
houses of Congress in the United States
looking at the role of bots in the
spread of misinformation so we looked at
this too we used multiple sophisticated
bot detection algorithms to find the
bots in our data and to pull them out so
we pulled them out we put them back in
and we compared what happens to our
measurements when we remove the bots and
when we put them back in and what we
found was that yes indeed bots were
accelerating the spread of false news
online but they were accelerating the
spread of true news at approximately the
same rate which means bots are not
responsible for the differential
diffusion of truth and falsity online we
can't abdicate that responsibility
because we humans are responsible for
that spread now everything that I have
told you so far unfortunately for all of
us is the good news the reason is
because it's about to get a whole lot
worse and two specific technologies are
going to make it worse we are going to
see the rise of a tremendous wave of
synthetic media fake video fake audio
that is very convincing to the human eye
and this will be powered by two
technologies the first of these is known
as generative adversarial networks this
is a machine learning model with two
networks a discriminator whose job it is to
determine whether something is true or
false and a generator whose job it is to
generate synthetic media so the
synthetic generator generates synthetic
video or audio and the discriminator
tries to tell is this real or is this
fake and then the generator sees what
the discriminator does and optimizes a
function to generate more and more
convincing video and audio in fact it is
the job of the generator to maximize the
likelihood that it will fool the
discriminator into thinking the
synthetic video and audio that it is
creating is actually true imagine a
machine in a Hyperloop trying to get
better and better at fooling us this
combined with a second technology which
is essentially the democratization of
artificial intelligence to the people
the ability for anyone without any
background in artificial intelligence or
machine learning to deploy these kinds
of algorithms to generate synthetic
media makes it ultimately so much easier
to create videos like this one we're
entering an era in which our enemies can
make it look like anyone is saying
anything at any point in time even if
they would never say those things so for
instance they could have me say things
like I don't know Killmonger was right
or Ben Carson is in the sunken place or
how about that simply President Trump is
a total and complete now you see
I would never say these things at least
not in a public address but someone else
would someone like Jordan Peele this is
a dangerous time moving forward we need
to be more vigilant with what we trust
from the internet that's a time when we
need to rely on trusted news sources
it may sound basic but how we move
forward in the age of information is going to
be the difference between whether we
survive
or whether we become some kind of
fucked-up dystopia thank you and stay
woke bitches in fact just recently the
White House issued a false doctored
video of a journalist interacting with
an intern who was trying to take his
microphone
they removed frames from this video in
order to make his actions seem more
punchy and when videographers and stunt
men and women were interviewed about
this type of technique they said yes we
use this in the movies all the time to
make our punches and kicks look more
choppy and more aggressive they then put
out this video and partly used it as
justification to revoke Jim Acosta the
reporters Press Pass from the White
House and CNN had to sue to have that
press pass reinstated there are about
five different paths that I can think of
that we can follow to try and address
some of these very difficult problems
today each one of them has promised but
each one of them has its own challenges
the first one is labeling think about it
this way when you go to the grocery
store to buy food to consume it's
extensively labeled you know how many
calories it has how much fat it contains
how many trans fats it has whether it's
been produced in a facility that
produces wheat or peanuts if you have an
allergy and yet when we consume
information we have no labels whatsoever
what is contained in this information is
it true or false
does this source typically put out true
information or false information is the
source credible where is this
information gathered from how many
reporters worked on this story what is
the policy of this journal in terms of
running with a fact do they need to have
two independent sources or three we have
none of that information when we are
consuming information that is a
potential Avenue but it comes with its
challenges for instance who gets to
decide in society what's true and what's
false is it the government's is it
Facebook is it an independent consortium
of fact checkers and who's checking the
fact checkers another potential Avenue
is incentives we know that during the
u.s. presidential election there was a
wave of misinformation that came from
Macedonia that didn't have any political
motive but instead had an economic
motive and this economic motive existed
because false news travels so much
farther faster and more deeply than the
truth and you can earn advertising
dollars as you garner eyeballs and
attention with this type of information
but if we can depress the spread of this
information perhaps it would reduce the
economic incentive to produce it at all
in the first place third we can think
about regulation and certainly we should
think about this option in the United
States currently we are exploring what
might happen if Facebook and others are
regulated recently in Europe the GDPR
went into effect at the end of May
instituting strict privacy policies and
uses of data and algorithmic
transparency requirements but this type
of regulation while we should consider
things like regulating political speech
labeling the fact that it's political
speech making sure foreign actors can't
fund political speech it also has its
own dangers for instance Malaysia just
instituted a six year prison sentence
for anyone found spreading
misinformation and in authoritarian
regimes these kinds of policies can be
used to suppress minority opinions and
to continue to extend repression the
fourth possible option is transparency
we want to know how do Facebook's
algorithms work how does the data
combined with the algorithms to produce
the outcomes that we see we want them to
open the kimono
and show us exactly the inner workings
of how Facebook is working and if we
want to know social medias effect on
society we need scientists researchers
and others to have access to this kind
of information but at the same time we
are asking Facebook to lock everything
down to keep all of the data secure to
not give data to third parties like
Cambridge University who then gave the
data to Cambridge Analytica that
created that scandal so Facebook and the
other social media platforms are facing
what I call a transparency paradox we
are asking them at the same time to be
open and transparent and simultaneously
secure this is a very difficult needle
to thread but they will need to thread
this needle if we are to achieve the
promise of social technologies while
avoiding their peril the final thing
that we could think about is algorithms
and machine learning technology devised
to root out and understand fake news how
it spreads and to try and dampen its
flow humans have to be in the loop of
this technology because we can never
escape that underlying any technological
solution or approach is a fundamental
ethical and philosophical question about
how do we define truth and falsity to
whom do we give the power to define
truth and falsity and which opinions are
legitimate which type of speech should
be allowed and so on technology is not a
solution for that ethics and philosophy
is a solution for that
nearly every theory of human
decision-making human cooperation and
human coordination has some sense of the
truth at its core but with the rise of
fake news the rise of fake video the
rise of fake audio we are teetering on
the brink of the end of reality where we
cannot tell what is real from what is
fake and that's potentially incredibly
dangerous we have to be vigilant in
defending the truth against
misinformation
with our technologies with our policies
and perhaps most importantly with our
own individual responsibilities
decisions behaviors and actions thank
you very much
[Applause]