How we can protect truth in the age of misinformation | Sinan Aral

TED
16 Jan 2020 · 15:04

Summary

TL;DR: The video script discusses the alarming spread of fake news and its impact on society, exemplified by the 2013 Associated Press Twitter hack and its market consequences. It delves into the role of social media in disseminating misinformation, the influence of bots, and the potential dangers of synthetic media. The speaker suggests possible solutions, including labeling, incentives, regulation, transparency, and the use of algorithms, while emphasizing the importance of ethical considerations in combating misinformation.

Takeaways

  • 📰 The Associated Press was hacked by Syrian hackers who spread false news of an explosion at the White House, causing a stock market crash and highlighting the impact of misinformation on financial markets.
  • 🛑 Automated trading algorithms reacted to the false tweet, demonstrating how quickly misinformation can affect global economies.
  • 📈 The fake news incident wiped out $140 billion in equity value, showing the tangible financial consequences of misinformation.
  • 🔍 Robert Mueller indicted Russian entities for meddling in the 2016 U.S. election, revealing the extent of foreign influence campaigns through social media.
  • 🌐 The Internet Research Agency's activities reached millions on social media, underlining the scale at which misinformation can be disseminated.
  • 📊 A study by Oxford University found that a significant portion of information spread on social media during the Swedish elections was fake, indicating a widespread issue.
  • 📚 The largest-ever longitudinal study of fake news spread on Twitter showed that false news spreads more rapidly and widely than true news.
  • 🤔 The study found that false news is more likely to be retweeted, raising questions about why people are more inclined to share unverified information.
  • 🧐 The 'novelty hypothesis' suggests that people share novel information to gain status, which may explain the rapid spread of false news.
  • 🤖 The role of bots in spreading misinformation was confirmed, but they affect both true and false news equally, indicating that human behavior is a key factor in the spread of false information.
  • 🎭 The rise of synthetic media, powered by technologies like generative adversarial networks, poses a new challenge in distinguishing reality from fiction.
  • 🛠️ Addressing the spread of misinformation requires a multifaceted approach, including labeling, incentives, regulation, transparency, and the use of algorithms and machine learning.

Q & A

  • What was the impact of the fake tweet by Syrian hackers on the stock market in 2013?

    -The fake tweet about explosions at the White House and injury to President Barack Obama led to the stock market crashing, wiping out 140 billion dollars in equity value in a single day.

  • What was the role of the Internet Research Agency in the 2016 U.S. presidential election?

    -The Internet Research Agency, a shadowy arm of the Kremlin, spread misinformation on social media during the election, reaching 126 million people on Facebook, issuing three million tweets, and creating 43 hours of YouTube content to sow discord.

  • What did the study by Oxford University reveal about the spread of misinformation in the recent Swedish elections?

    -The study showed that one third of all information spreading on social media about the Swedish elections was fake or misinformation.

  • What was the main finding of the largest-ever longitudinal study of the spread of fake news online?

    -The study found that false news diffused further, faster, deeper, and more broadly than true news in every category, with false political news being the most viral.

  • How was the hypothesis that false-news spreaders have more followers or are more active on Twitter disproved?

    -The study revealed that false-news spreaders actually had fewer followers, were less active, and had been on Twitter for a shorter period of time compared to those spreading true news.

  • What is the 'novelty hypothesis' proposed to explain the spread of false news?

    -The 'novelty hypothesis' suggests that human attention is drawn to novelty, and people like to share novel information because it makes them seem like they have access to inside information, gaining status in the process.

  • How did sentiment analysis of replies to true and false tweets support the novelty hypothesis?

    -Sentiment analysis showed that false news exhibited significantly more surprise and disgust in replies, while true news had more anticipation, joy, and trust, corroborating the idea that new and surprising information is more likely to be shared.

  • What role did bots play in the spread of false news according to the study?

    -Bots were found to accelerate the spread of both false and true news online, but they were not responsible for the differential diffusion of truth and falsity, as humans are primarily responsible for that spread.

  • What are the two specific technologies mentioned that could make the spread of misinformation even worse?

    -The two technologies are 'generative adversarial networks' for creating convincing fake videos and audio, and the democratization of artificial intelligence, allowing anyone to deploy these algorithms to generate synthetic media.

  • What are some of the potential paths to address the problem of misinformation mentioned in the script?

    -The potential paths include labeling information, adjusting incentives to reduce the economic benefits of spreading false news, regulation of social media platforms, increasing transparency about how algorithms work, and developing algorithms and machine learning to identify and mitigate the spread of fake news.

  • What is the 'transparency paradox' that social media platforms like Facebook are facing?

    -The 'transparency paradox' refers to the simultaneous demand for social media platforms to be open and transparent about their algorithms and data while also ensuring the security and privacy of that data.

  • How does the speaker suggest we should approach the ethical and philosophical questions underlying technological solutions to misinformation?

    -The speaker suggests that while technology can help identify and mitigate the spread of misinformation, the ethical and philosophical questions about defining truth and falsity, and determining legitimate opinions and speech, require human involvement and cannot be solved by technology alone.

Outlines

00:00

📰 Impact of Fake News on Society and Economy

The paragraph discusses the rapid spread of a false tweet about an explosion at the White House in 2013, which was retweeted extensively and led to a significant stock market crash, resulting in a loss of 140 billion dollars in equity value. It highlights the role of automated trading algorithms responding to the sentiment of the tweet. The paragraph also touches on the indictments issued by Robert Mueller against Russian entities for their involvement in the 2016 U.S. presidential election through the Internet Research Agency's misinformation campaigns on social media. The study by Oxford University on the prevalence of fake news in the Swedish elections and the potential for social-media misinformation to incite violence is also mentioned. The speaker introduces their research on the spread of fake news online, comparing the diffusion of true and false news stories verified by fact-checking organizations, and noting that false news spreads more widely and rapidly than true news.

05:00

🧐 The Novelty Hypothesis and Emotional Response to Fake News

This paragraph delves into the reasons behind the viral nature of false news, proposing a 'novelty hypothesis' which suggests that humans are drawn to and share novel information due to its perceived value and the status it confers. The speaker's research measured the novelty of tweets and analyzed the sentiment in replies to both true and false news, finding that false news elicited more surprise and disgust, while true news received more anticipation, joy, and trust. The paragraph also addresses the role of bots in spreading misinformation, concluding that while bots do accelerate the spread of news, they affect both true and false news equally, indicating that the differential spread of truth and falsity is a human-driven phenomenon. The speaker foreshadows the worsening of the issue with the advent of synthetic media powered by technologies like generative adversarial networks, which can create convincing fake audio and video.

10:01

🛠 Possible Solutions to Combat Fake News

The final paragraph outlines potential strategies to address the challenges posed by fake news. It begins by discussing the idea of labeling information, similar to food labeling in grocery stores, to provide consumers with the source and credibility of the information. The paragraph then explores the concept of incentives, suggesting that reducing the spread of false news could diminish the economic motivation for its creation. Regulation is presented as another avenue, with the current exploration in the U.S. of how to regulate social media platforms like Facebook. The paragraph also discusses the importance of transparency in understanding how algorithms work and the challenges of balancing openness with data security. Finally, the paragraph considers the role of algorithms and machine learning in identifying and controlling the spread of fake news, emphasizing the need for human involvement in addressing ethical and philosophical questions about truth and legitimacy in the era of synthetic media.

Keywords

💡Fake News

Fake news refers to false information or propaganda published under the guise of being authentic news. In the video, it is central to the discussion as the script describes how a false tweet about an explosion at the White House and injuries to President Obama led to significant disruption in the stock market, illustrating the impact of misinformation on society and financial systems.

💡Viral

The term 'viral' in the context of information dissemination refers to content that spreads rapidly and widely, often through social media platforms. The video mentions how the fake news tweet went viral, highlighting the speed at which misinformation can propagate and the challenges in controlling its spread.

💡Automated Trading Algorithms

These are computer programs used in the stock market to execute trades based on predefined criteria or responses to certain triggers, such as news sentiment. The script explains how these algorithms reacted to the fake news about the White House, causing a significant drop in the stock market, demonstrating the real-world consequences of misinformation on financial systems.

💡Indictment

An indictment is a formal accusation or charge against someone, typically in a serious legal matter. In the video, the mention of indictments by Robert Mueller against Russian entities and individuals underscores the legal implications and international dimensions of spreading misinformation and interference in elections.

💡Internet Research Agency

The Internet Research Agency is portrayed in the video as a Russian organization involved in spreading misinformation on social media. It is highlighted as an example of state-sponsored efforts to influence public opinion and elections through fake news and social media manipulation.

💡Misinformation

Misinformation is false or inaccurate information that is spread, regardless of whether there is an intent to deceive. The video discusses the extensive reach of misinformation campaigns, such as those during the US and Swedish elections, emphasizing the pervasiveness of this issue in shaping public perception and opinion.

💡Longitudinal Study

A longitudinal study is a type of research that involves observing and collecting data from the same subjects over a long period. The video refers to the largest-ever longitudinal study of fake news on Twitter, which analyzed the spread of both true and false news stories from 2006 to 2017, providing insights into the dynamics of information dissemination.

💡Novelty Hypothesis

The novelty hypothesis posits that humans are more likely to share information that is new or surprising. The video uses this concept to explain why false news might spread more rapidly, suggesting that the novelty of such information makes it more attractive to share, thus contributing to its virality.

💡Sentiment Analysis

Sentiment analysis is the process of determining the emotional tone behind words, used here to understand reactions to true and false news. The video describes how false news elicited more surprise and disgust in replies, while true news garnered more anticipation, joy, and trust, reinforcing the idea that emotional responses influence the spread of information.

💡Bots

Bots, in the context of social media, are automated accounts that can post and share content. The video discusses the role of bots in amplifying the spread of both true and false news online, but clarifies that they do not account for the differential spread of truth and falsity, placing the responsibility on human behavior.

💡Synthetic Media

Synthetic media refers to artificially created content that can appear convincingly real, such as deepfake videos or audio. The video warns of the impending rise of synthetic media, powered by technologies like generative adversarial networks, which pose a significant threat to our ability to distinguish truth in the digital age.

💡Generative Adversarial Networks (GANs)

GANs are a class of machine learning models consisting of two parts: a generator that creates content and a discriminator that evaluates it. The video describes GANs as a technology that can produce highly convincing fake videos and audio, contributing to the problem of synthetic media and the blurring of reality.

💡Transparency

Transparency in the context of social media refers to the openness and clarity of how platforms operate, especially regarding their algorithms and data use. The video suggests that greater transparency could help address issues of misinformation, but also acknowledges the challenges of balancing openness with data security.

💡Ethics and Philosophy

Ethics and philosophy are discussed in the video as fundamental to addressing the problem of misinformation. They pertain to the moral principles and philosophical concepts that guide decisions about truth, falsity, and the legitimacy of speech and opinions, suggesting that technology alone cannot solve the issues raised by fake news.

Highlights

On April 23, 2013, a false tweet by Syrian hackers claiming explosions at the White House was retweeted 4,000 times in less than five minutes, causing a stock market crash and wiping out $140 billion in equity value.

Automated trading algorithms reacted to the fake tweet's sentiment, demonstrating the impact of misinformation on financial markets.

The indictment by Robert Mueller against Russian entities for election interference highlights the role of the Internet Research Agency in spreading misinformation during the 2016 US presidential election.

The Internet Research Agency's efforts reached 126 million people on Facebook, issued 3 million tweets, and created 43 hours of YouTube content with fake news designed to sow discord.

A study by Oxford University revealed that one third of information spread on social media about the recent Swedish elections was fake or misinformation.

Social-media misinformation campaigns can spread 'genocidal propaganda,' as in the case of the Rohingya in Burma, and have triggered mob killings in India.

The largest-ever longitudinal study of fake news spread on Twitter from 2006 to 2017 was published in 'Science', analyzing the diffusion of verified true and false news stories.

False news was found to diffuse further, faster, deeper, and more broadly than true news in every category studied, with false political news being the most viral.

Contrary to initial hypotheses, false-news spreaders tended to have fewer followers, were less active, and had been on Twitter for a shorter time compared to those spreading true news.

False news was 70 percent more likely to be retweeted than true news, even when controlling for various factors.

The 'novelty hypothesis' suggests that human attention is drawn to novelty, and sharing novel information can increase social status, which may explain the spread of false news.

Sentiment analysis of replies to tweets showed that false news elicited more surprise and disgust, while true news received more anticipation, joy, and trust.

Bots were found to accelerate the spread of both false and true news online, but they did not account for the differential diffusion of truth and falsity.

The rise of synthetic media, powered by technologies like generative adversarial networks, poses a significant threat to our ability to tell real information from fake.

The White House's use of a doctored video to justify revoking a reporter's press pass highlights the real-world implications of synthetic media.

Potential solutions to combat misinformation include labeling, incentives, regulation, transparency, and the use of algorithms and machine learning, each with its own challenges.

The ethical and philosophical questions underlying any technological solution involve defining truth and falsity and determining the legitimacy of opinions and speech.

The speaker calls for vigilance in defending the truth against misinformation, emphasizing the role of technology, policy, and individual responsibility.

Transcripts

00:00
Transcriber: Ivana Korom; Reviewer: Krystian Aparta

00:13
So, on April 23 of 2013, the Associated Press put out the following tweet on Twitter. It said, "Breaking news: Two explosions at the White House and Barack Obama has been injured." This tweet was retweeted 4,000 times in less than five minutes, and it went viral thereafter.

00:40
Now, this tweet wasn't real news put out by the Associated Press. In fact, it was false news, or fake news, propagated by Syrian hackers who had infiltrated the Associated Press Twitter handle. Their purpose was to disrupt society, but they disrupted much more, because automated trading algorithms immediately seized on the sentiment of this tweet and began trading based on the potential that the president of the United States had been injured or killed in this explosion. And as they started tweeting, they immediately sent the stock market crashing, wiping out 140 billion dollars in equity value in a single day.
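The talk doesn't say how those trading algorithms actually worked. As a purely illustrative sketch of a sentiment-triggered rule, a toy version might look like the following; the keyword list, scoring, and threshold are all assumptions, not a description of any real system.

```python
# Toy sketch of a sentiment-triggered trading rule. Purely illustrative:
# real systems are far more sophisticated, and every name and threshold
# here is an assumption.
import re

NEGATIVE_KEYWORDS = {"explosion", "explosions", "injured", "attack", "killed"}

def headline_sentiment(text: str) -> int:
    """Count negative keywords; more hits means more negative sentiment."""
    words = re.findall(r"[a-z]+", text.lower())
    return -sum(w in NEGATIVE_KEYWORDS for w in words)

def trading_signal(text: str, threshold: int = -2) -> str:
    """Emit SELL when a headline looks sufficiently alarming."""
    return "SELL" if headline_sentiment(text) <= threshold else "HOLD"

tweet = ("Breaking news: Two explosions at the White House "
         "and Barack Obama has been injured.")
print(trading_signal(tweet))  # -> SELL; no human pauses to verify the source
```

The point of the sketch is the failure mode: a rule like this reacts to the text alone, so a single fabricated tweet can move real money before any human checks the claim.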

01:25
Robert Mueller, special counsel prosecutor in the United States, issued indictments against three Russian companies and 13 Russian individuals for a conspiracy to defraud the United States by meddling in the 2016 presidential election. And what this indictment tells is the story of the Internet Research Agency, the shadowy arm of the Kremlin on social media. During the presidential election alone, the Internet Research Agency's efforts reached 126 million people on Facebook in the United States, issued three million individual tweets and 43 hours' worth of YouTube content, all of which was fake: misinformation designed to sow discord in the US presidential election.

02:20
A recent study by Oxford University showed that in the recent Swedish elections, one third of all of the information spreading on social media about the election was fake or misinformation. In addition, these types of social-media misinformation campaigns can spread what has been called "genocidal propaganda," for instance against the Rohingya in Burma, triggering mob killings in India.

02:49
We studied fake news, and began studying it before it was a popular term. We recently published the largest-ever longitudinal study of the spread of fake news online on the cover of "Science" in March of this year. We studied all of the verified true and false news stories that ever spread on Twitter, from its inception in 2006 to 2017. When we studied this information, we studied news stories verified by six independent fact-checking organizations, so we knew which stories were true and which stories were false. We could measure their diffusion: the speed of their diffusion, the depth and breadth of their diffusion, how many people become entangled in the information cascade, and so on. And what we did in this paper was compare the spread of true news to the spread of false news.
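As a rough sketch of what cascade "size, depth and breadth" mean in practice, the metrics can be computed from a retweet tree. The tiny parent-pointer tree below is invented for illustration; it is not the paper's exact definition or data.

```python
# Hedged sketch of cascade metrics (size, depth, max breadth), computed from
# a retweet tree given as {child_tweet: parent_tweet}. The tree is made up.
from collections import Counter

cascade = {"rt1": "origin", "rt2": "origin", "rt3": "rt1",
           "rt4": "rt1", "rt5": "rt3"}

def depth_of(node: str) -> int:
    """Hops from a retweet back to the original tweet."""
    d = 0
    while node in cascade:      # walk parent pointers up to the root
        node = cascade[node]
        d += 1
    return d

size = 1 + len(cascade)                      # accounts involved in the cascade
depths = [depth_of(n) for n in cascade]
depth = max(depths)                          # longest retweet chain
breadth = max(Counter(depths).values())      # most retweets at any one level

print(size, depth, breadth)  # -> 6 3 2
```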

03:46
And here's what we found. We found that false news diffused further, faster, deeper and more broadly than the truth in every category of information that we studied, sometimes by an order of magnitude. And in fact, false political news was the most viral: it diffused further, faster, deeper and more broadly than any other type of false news. When we saw this, we were at once worried but also curious. Why? Why does false news travel so much further, faster, deeper and more broadly than the truth?

04:20
The first hypothesis that we came up with was, "Well, maybe people who spread false news have more followers or follow more people, or tweet more often, or maybe they're more often 'verified' users of Twitter, with more credibility, or maybe they've been on Twitter longer." So we checked each one of these in turn, and what we found was exactly the opposite. False-news spreaders had fewer followers, followed fewer people, were less active, were less often "verified" and had been on Twitter for a shorter period of time. And yet false news was 70 percent more likely to be retweeted than the truth, controlling for all of these and many other factors.

05:00
So we had to come up with other explanations, and we devised what we called a "novelty hypothesis." If you read the literature, it is well known that human attention is drawn to novelty, things that are new in the environment. And if you read the sociology literature, you know that we like to share novel information: it makes us seem like we have access to inside information, and we gain in status by spreading this kind of information. So what we did was measure the novelty of an incoming true or false tweet, compared to the corpus of what that individual had seen in the 60 days prior on Twitter.
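The study's actual novelty measures are information-theoretic; as a loose stand-in for the idea, one can score an incoming tweet against a user's prior 60-day corpus using bag-of-words cosine distance. The data and the choice of distance below are assumptions for illustration only.

```python
# Loose sketch of a novelty score: how unlike a user's recent reading is
# this incoming tweet? Uses bag-of-words cosine distance; the study itself
# used more careful information-theoretic measures.
import math
import re
from collections import Counter

def bag_of_words(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def novelty(tweet: str, prior_corpus: str) -> float:
    """1.0 = totally unlike anything seen before; 0.0 = fully familiar."""
    a, b = bag_of_words(tweet), bag_of_words(prior_corpus)
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - (dot / norm if norm else 0.0)

seen = "markets rallied today as earnings beat expectations again"
print(novelty("markets rallied again today", seen))                  # low
print(novelty("two explosions reported at the white house", seen))   # high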

05:43
But that wasn't enough, because we thought to ourselves, "Well, maybe false news is more novel in an information-theoretic sense, but maybe people don't perceive it as more novel." So to understand people's perceptions of false news, we looked at the information and the sentiment contained in the replies to true and false tweets. And what we found was that, across a bunch of different measures of sentiment -- surprise, disgust, fear, sadness, anticipation, joy and trust -- replies to false tweets exhibited significantly more surprise and disgust, while replies to true tweets exhibited significantly more anticipation, joy and trust. The surprise corroborates our novelty hypothesis: this is new and surprising, and so we're more likely to share it.
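Schematically, that reply-sentiment comparison is a lexicon count: tally emotion-bearing words in the replies to each story. The toy below uses a tiny hand-made lexicon; the word lists and replies are invented, whereas the study used an established emotion lexicon over millions of real replies.

```python
# Toy emotion scoring of reply text against a tiny hand-made lexicon.
# These word lists are illustrative assumptions only.
import re
from collections import Counter

EMOTION_LEXICON = {
    "surprise": {"wow", "unbelievable", "shocking", "what"},
    "disgust":  {"gross", "awful", "disgusting", "ugh"},
    "trust":    {"confirmed", "reliable", "source", "verified"},
    "joy":      {"great", "glad", "wonderful", "happy"},
}

def emotion_profile(replies: list[str]) -> Counter:
    """Count lexicon hits per emotion across all replies to one story."""
    profile = Counter()
    for reply in replies:
        for word in re.findall(r"[a-z]+", reply.lower()):
            for emotion, words in EMOTION_LEXICON.items():
                if word in words:
                    profile[emotion] += 1
    return profile

false_replies = ["Wow, unbelievable!", "What?! This is shocking", "Ugh, awful"]
true_replies = ["Glad to see this confirmed", "Great, reliable source"]
print(emotion_profile(false_replies))  # dominated by surprise / disgust
print(emotion_profile(true_replies))   # dominated by joy / trust
```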

06:43
At the same time, there was congressional testimony in front of both houses of Congress in the United States looking at the role of bots in the spread of misinformation. So we looked at this too: we used multiple sophisticated bot-detection algorithms to find the bots in our data and pull them out. We pulled them out, we put them back in, and we compared what happened to our measurements. And what we found was that, yes indeed, bots were accelerating the spread of false news online, but they were accelerating the spread of true news at approximately the same rate. Which means bots are not responsible for the differential diffusion of truth and falsity online. We can't abdicate that responsibility, because we, humans, are responsible for that spread.
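As a schematic of that bots-in versus bots-out robustness check: compute the spread statistics with all accounts included, then again with bot-flagged accounts removed, and see whether the true/false gap survives. All cascades and bot labels below are invented, and real bot detection is a hard problem in its own right.

```python
# Schematic of the bots-in / bots-out comparison on made-up cascades.
cascades = [
    {"label": "false", "retweeters": ["u1", "bot1", "u2", "u3", "bot2"]},
    {"label": "true",  "retweeters": ["u4", "bot3", "u5"]},
    {"label": "false", "retweeters": ["u6", "u7", "bot4", "u8"]},
]
BOTS = {"bot1", "bot2", "bot3", "bot4"}  # assumed output of a bot detector

def mean_size(label: str, drop_bots: bool) -> float:
    """Average cascade size for one label, optionally excluding bots."""
    sizes = [
        sum(1 for u in c["retweeters"] if not (drop_bots and u in BOTS))
        for c in cascades if c["label"] == label
    ]
    return sum(sizes) / len(sizes)

for label in ("false", "true"):
    # If bots amplify both kinds of news by a similar factor, the *gap*
    # between false and true persists without them -- the paper's finding.
    print(label, mean_size(label, drop_bots=False), mean_size(label, drop_bots=True))
```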

07:34
Now, everything that I have told you so far, unfortunately for all of us, is the good news. The reason is that it's about to get a whole lot worse, and two specific technologies are going to make it worse. We are going to see the rise of a tremendous wave of synthetic media: fake video and fake audio that is very convincing to the human eye. And this will be powered by two technologies. The first of these is known as "generative adversarial networks." This is a machine-learning model with two networks: a discriminator, whose job it is to determine whether something is true or false, and a generator, whose job it is to generate synthetic media. So the generator generates synthetic video or audio, and the discriminator tries to tell, "Is this real or is this fake?" In fact, it is the job of the generator to maximize the likelihood that it will fool the discriminator into thinking the synthetic video and audio that it is creating is actually real. Imagine a machine in a hyperloop, trying to get better and better at fooling us.
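A minimal sketch of that adversarial loop, assuming PyTorch and toy one-dimensional "media" (the talk names no framework, architecture, or data; everything below is illustrative):

```python
# Minimal GAN sketch: a generator learns to mimic a simple 1-D "real data"
# distribution while a discriminator learns to tell real from fake.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n: int) -> torch.Tensor:
    # "Real" media stands in for samples from a Normal(4, 1.25) distribution.
    return torch.randn(n, 1) * 1.25 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator: label real samples 1 and synthetic samples 0.
    real = real_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: maximize the chance the discriminator calls its output real.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real mean (~4).
print(generator(torch.randn(256, 8)).mean().item())
```

Each round, the generator's gradient pushes it toward samples the discriminator scores as real: exactly the "better and better at fooling us" dynamic the talk describes, just on numbers instead of video.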

08:51
This, combined with the second technology, which is essentially the democratization of artificial intelligence to the people, the ability for anyone, without any background in artificial intelligence or machine learning, to deploy these kinds of algorithms to generate synthetic media, ultimately makes it so much easier to create such videos.

09:14
The White House issued a false, doctored video of a journalist interacting with an intern who was trying to take his microphone. They removed frames from the video in order to make his actions seem more punchy. And when videographers and stuntmen and women were interviewed about this type of technique, they said, "Yes, we use this in the movies all the time to make our punches and kicks look more choppy and more aggressive." They then put out this video and partly used it as justification to revoke the press pass of Jim Acosta, the reporter, from the White House. And CNN had to sue to have that press pass reinstated.

10:00
There are about five different paths that I can think of that we can follow to try and address some of these very difficult problems today. Each one of them has promise, but each one of them has its own challenges. The first one is labeling. Think about it this way: when you go to the grocery store to buy food to consume, it's extensively labeled. You know how many calories it has, how much fat it contains, and yet when we consume information, we have no labels whatsoever. What is contained in this information? Is the source credible? Where is this information gathered from? We have none of that information when we are consuming information. That is a potential avenue, but it comes with its challenges. For instance, who gets to decide, in society, what's true and what's false? Is it the governments? Is it Facebook? Is it an independent consortium of fact-checkers? And who's checking the fact-checkers?

11:02
Another potential avenue is incentives. We know that during the US presidential election there was a wave of misinformation that came from Macedonia that didn't have any political motive but instead had an economic motive. And this economic motive existed because false news travels so much farther, faster and more deeply than the truth, and you can earn advertising dollars as you garner eyeballs and attention with this type of information. But if we can depress the spread of this information, perhaps it would reduce the economic incentive to produce it in the first place.

11:40
Third, we can think about regulation, and certainly we should think about this option. In the United States, currently, we are exploring what might happen if Facebook and others are regulated. While we should consider things like regulating political speech, labeling the fact that it's political speech, and making sure foreign actors can't fund political speech, regulation also has its own dangers. For instance, Malaysia just instituted a six-year prison sentence for anyone found spreading misinformation. And in authoritarian regimes, these kinds of policies can be used to suppress minority opinions and to continue to extend repression.

12:24
The fourth possible option is transparency. We want to know how Facebook's algorithms work: how does the data combine with the algorithms to produce the outcomes that we see? We want them to open up and show us exactly the inner workings of how Facebook operates. And if we want to know social media's effect on society, we need scientists, researchers and others to have access to this kind of information. But at the same time, we are asking Facebook to lock everything down, to keep all of the data secure. So Facebook and the other social media platforms are facing what I call a transparency paradox: we are asking them, at the same time, to be open and transparent and, simultaneously, secure. This is a very difficult needle to thread, but they will need to thread it if we are to achieve the promise of social technologies while avoiding their peril.

13:24
The final thing that we could think about is algorithms and machine learning: technology devised to root out and understand fake news, how it spreads, and to try and dampen its flow. Humans have to be in the loop of this technology, because we can never escape the fact that underlying any technological solution or approach is a fundamental ethical and philosophical question: how do we define truth and falsity, to whom do we give the power to define truth and falsity, which opinions are legitimate, which types of speech should be allowed, and so on. Technology is not a solution for that. Ethics and philosophy is a solution for that.
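As a hypothetical sketch of what keeping humans in the loop could mean operationally: a classifier auto-labels only high-confidence stories and routes everything uncertain to human fact-checkers, leaving the judgment calls the talk insists on to people. The training snippets and threshold below are stand-ins, assuming scikit-learn; no real fake-news corpus is used.

```python
# Hypothetical human-in-the-loop triage: score stories with a simple text
# classifier, auto-label confident calls, defer uncertain ones to humans.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "shocking secret cure doctors don't want you to know",
    "you won't believe what this politician secretly did",
    "city council approves budget after public hearing",
    "quarterly report shows modest growth in exports",
]
train_labels = ["false", "false", "true", "true"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def triage(story: str, threshold: float = 0.8) -> str:
    """Auto-label only when confident; otherwise defer to a human."""
    probs = model.predict_proba([story])[0]
    best = probs.argmax()
    if probs[best] >= threshold:
        return f"auto: {model.classes_[best]}"
    return "route to human fact-checker"  # the ethical call stays with people

print(triage("shocking secret the government doesn't want you to know"))
print(triage("report on regional exports published today"))
```

With so little training data, most stories land below the confidence threshold and go to humans, which is the design point: the algorithm dampens the flow, but people decide what counts as true.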

14:10
Nearly every theory of human decision making, human cooperation and human coordination has some sense of the truth at its core. But with the rise of fake news, the rise of fake video, the rise of fake audio, we are teetering on the brink of the end of reality, where we cannot tell what is real from what is fake. And that's potentially incredibly dangerous. We have to be vigilant in defending the truth against misinformation: with our technologies, with our policies and, perhaps most importantly, with our own individual responsibilities, decisions, behaviors and actions.

14:57
Thank you very much.

(Applause)


Related Tags

Fake News, Misinformation, Social Media, Digital Impact, Truth Decay, Infodemics, Election Interference, Algorithmic Bias, Media Literacy, Fact-Checking, Synthetic Media