Yuval Noah Harari | 21 Lessons for the 21st Century | Talks at Google

Talks at Google
11 Oct 2018 · 58:47

Summary

TL;DR: Israeli historian Yuval Noah Harari joins a Talks at Google session to discuss key themes from his bestselling books, which explore human history and potential futures. He argues technology is increasingly able to understand and influence humans better than we understand ourselves. Harari stresses we must re-evaluate notions of free will and refocus economics around human welfare, not corporate profits. Though technology creates risks, proper regulations and compassion can lead to enormous benefits. Harari advocates individuals knowing themselves deeply to immunize against technological manipulation and determine what future we want to build.

Takeaways

  • 😊 Yuval believes predicting the future is impossible, but influencing it is possible
  • 📚 Throughout history shared fictions like religions and corporations have been immensely powerful forces enabling cooperation
  • 😮 AI and biotech revolutions could provide tremendous benefits but also pose serious dangers like corporations manipulating people at scale
  • 🧠 Getting to know yourself through meditation, art etc. will be more critical than ever before
  • 😠 The combination of biotech and AI could replace many human jobs and disrupt concepts humans cherish like free will
  • ❓ Spirituality dealing with open-ended questions is becoming practically relevant for those building technologies today
  • 📰 The economics of free, exciting news in exchange for attention could be changed by charging for higher-quality information
  • 💖 Compassion is critical for any successful society and should be baked into any human enhancements pursued
  • 🚗 Self-driving cars force us to make philosophical decisions about ethics that will have real-world consequences
  • ⚖️ We may need a universal basic income if automation permanently displaces many people from jobs, but the real crisis could be in developing countries that get left behind

Q & A

  • How can technology and biology combine to revolutionize AI, according to Harari?

    -Harari argues that AI will only become truly revolutionary when infotech merges with biotech. Understanding what's happening inside the human body and brain will allow AI systems to monitor, diagnose, and manipulate people far more effectively.

  • What does Harari mean when he says humans will have to constantly 'reinvent themselves' in the future?

    -As the pace of technological and societal change accelerates, people will need the psychological flexibility to adapt, learn new skills, and change careers multiple times throughout their lives.

  • How does Harari differentiate between 'religion' and 'spirituality'?

    -He defines religion as pre-packaged answers telling people what to believe, while spirituality involves open-ended questioning and quests to explore life's biggest mysteries.

  • What does Harari see as the root cause of the current fake news crisis?

    -He argues the business model of providing exciting news for free in exchange for people's attention has made truth irrelevant. Maximizing engagement overrides factual accuracy.

  • Why does Harari claim free will has always been a myth?

    -He argues science shows all processes are either deterministic, random, or a combination. So while humans make choices, we don't choose the underlying will directing our desires and decisions.

  • How might self-driving cars force engineers to grapple with philosophical questions?

    -Programming autonomous vehicles to handle ethical dilemmas like who to save in an accident means translating philosophical theories into practical algorithms.

  • What does Harari mean when he says corporations and nations are shared fictions?

    -He claims entities like companies and countries exist only in our collective imagination. But as shared stories embraced by billions, they wield enormous influence.

  • Why does Harari urge technologists to map multiple scenarios when predicting the future?

    -With complex technologies like AI, focusing on just one or two outcomes fails to account for the full range of possibilities, both good and bad.

  • How might disgust mechanisms in the brain be hacked to fuel prejudice and genocide?

    -Harari argues playing on people's hardwired revulsion reactions facilitates dehumanization of disfavored groups, enabling their persecution.

  • What single AI project would Harari task Google with pursuing?

    -He challenges Google to develop AI systems focused on deeply understanding individual users in order to protect them rather than exploit them.

Outlines

00:00

😊 Introducing the speakers and topics

Wilson White from Google welcomes Professor Yuval Noah Harari to give a talk. Harari is an author and historian who has written several popular books. They will discuss themes from his books like free will, consciousness, intelligence, the future of humans, AI ethics, and pressing issues society faces.

05:01

😕 Technology threatens privacy and free will

Harari argues free will has always been an illusion according to science. But technology now threatens privacy of thoughts and feelings, giving external entities more access to our inner worlds than we have ourselves. The combination of infotech and biotech is beginning to hack humans on an unprecedented scale.

10:02

😢 Loss of human privacy and access to ourselves

Humans are losing privileged access to understanding ourselves better than external algorithms can, threatening notions of identity. Knowing our blood pressure by face scan or mood by eye movements erodes privacy. The ability for corporations and governments to know us better than we know ourselves makes assumptions of free will obsolete.

15:04

🤔 Balancing dystopian and utopian futures

Harari acknowledges the benefits of technologies like healthcare AI and self-driving vehicles even as he focuses on warning about dangers. He says technology itself is never deterministic; we can build heaven or hell. We must remember humans created tools to serve us, not become slaves to serve our tools.

20:04

😠 Shared fictions like religion can lead to harm

Harari distinguishes the god of mystery and awe from the petty lawgiver used to justify harming others. Many fictions like religion and money have enabled large-scale human cooperation even while being used for oppression. Remembering these tools were made to serve humans can prevent such inversion.

25:08

🤥 Rethinking what truth itself means

While suffering is real, fictions like nations only cause suffering indirectly through the human experience. With technology hacking human free will, perhaps new shared stories will develop. The proliferation of fake news pinpoints the need to reevaluate assumptions about objective truth and verification.

30:11

😌 Pursuing spirituality assists responsible tech

Unlike religion's answers, spirituality asks open and ethical questions that force scientists building future technologies to confront decisions about values. Understanding yourself aids acting compassionately toward others and finding inner peace.

35:16

📖 Revolutionizing the news market

To combat fake news, Harari suggests changing the ad-based model where excitement gets clicks but abandons truth and thoughtfulness. High-quality news costs money to produce; it should also cost money to consume responsibly. Google could assist that transition and change societal expectations.

40:17

😃 Overcoming inequality through global unity

With automation threatening economic security, ideas like universal basic income require global cooperation across borders. The most vulnerable stand to lose opportunities from the emerging bio/AI revolution; planning must focus on uplifting those likely to suffer most from disruption.

45:20

👁️ Using meditation for inner knowledge

Beyond intelligence, knowing yourself deeply through practices like meditation provides clarity for life decisions and work insights. Passively observing your mind's constant generation of ego-based narratives allows you to mindfully identify reality itself in the present.

50:23

❤️ Instilling societal compassion

Designing tech only for intelligence risks losing compassion; understanding yourself grounds compassion for others. Anger or hatred harms yourself immediately, while compassion brings self-peace. Remembering suffering unites human experience for positive change.

55:27

🤖 Building an empathetic AI protector

Given Google's resources and ethical duty, Harari playfully challenges it to build AI that understands individuals deeply in order to protect them rather than exploit them. Understanding authentic needs provides opportunities for technology guided by wisdom and care.

Keywords

💡AI

Artificial Intelligence. A key theme in the talk is the potential impact, both positive and negative, of advancing AI technology. Harari warns about AI and biotech merging to 'hack humans', predicting corporations and governments will have more insight into our minds than we do.

💡biotech

Biotechnology. Harari argues the combination of AI and advancements in biotech pose the greatest threat to humanity. Biometric sensors providing external access to what's happening inside our bodies could enable corporations and governments to manipulate human behavior.

💡free will

The concept of humans having independent agency over their choices and actions. Harari argues free will has always been an illusion, but new technologies threaten to conclusively disprove it by giving external entities privileged access to our inner worlds.

💡self-driving cars

Autonomous vehicles. Used as an example of a technology forcing developers to grapple with philosophical questions like the value of a human life. The ethical choices required have real-world consequences.

💡spirituality

Distinct from religion, spirituality refers to open-ended questioning and quests to understand existential concepts like the meaning of life. Harari argues these questions can no longer be ignored as they are integral to decisions about technological advancement.

💡fake news

False or misleading information presented as news. Harari traces the rise of fake news to the current model of free, exciting stories in exchange for user attention and data.

💡fiction

Harari argues nations, corporations, money, and other collective belief systems enabling large-scale human cooperation are fictional stories rather than objective truths. He warns against losing sight of the fact that these were created to serve human needs.

💡attention economy

The business model of internet companies providing free services in exchange for user attention and data. Harari implicates this in the proliferation of fake news and excitation-driven content.

💡meditation

A mental training practice Harari credits for developing his focus, clarity and ability to distinguish stories from reality. He advocates self-knowledge through meditation as essential for maintaining agency amidst technological disruption.

💡inequality

The video touches on rising inequality both within and between nations as a critical issue. Harari argues that although automation may concentrate wealth in technology hubs like California, the impact on developing countries could be even more dire.

Highlights

The most important things to emphasize in education are things like emotional intelligence and mental stability.

We need to build identities like tents that you can fold and move elsewhere.

The privileged access now belongs to corporations like Google. They can have access to things happening ultimately inside my body and brain, which I don't know about.

Suffering is the most real thing in the world. If you want to know whether a story is about a real entity or a fictional entity, you should just ask, can this entity actually suffer?

Engineers must tackle spiritual questions. If you are building a self-driving car, by force, you have to deal with questions like free will.

Fake news is old news. We've had them throughout history, and sometimes in much worse form than what we see today.

The current incarnation of the fake news problem has a lot to do with the model of the news and information market.

The really big revolution, which is coming very quickly, will be when the AI revolution and machine learning and all that, the infotech revolution, meets and merges with the biotech revolution.

When the managerial/political/ethical ability lags behind technological ability, you end up with enormous disasters.

Spirituality is when you have some big question about life. Religion is when somebody comes and tells you, this is the answer.

The experiment is a story that we share. It's things that we humans have invented and created in order to serve certain needs and desires that we have.

An AI system that gets to know me in order to protect me and not in order to sell me products or make me click on advertisements and so forth.

Transcripts

[00:00] [MUSIC PLAYING]

[00:12] WILSON WHITE: Good afternoon, everyone, especially for those of you who are here in California. My name is Wilson White, and I'm on the public policy and government relations team here in California. We have an exciting talk for you today as part of our Talks at Google series, as well as a series of conversations we're having around AI ethics and technology ethics more generally. So today, I'm honored to have Professor Yuval Noah Harari with us.

Yuval is an Israeli historian and a professor at the Hebrew University of Jerusalem. He is a dynamic speaker, thinker, and now an international bestselling author. He's the author of three books. We're going to talk about each of those books today. The first book he published in 2014, "Sapiens," which explored some of our history as humans. His second book in 2016 had an interesting take on our future as humans. It was "Homo Deus." And then he recently published a new book, "21 Lessons for the 21st Century," which attempts to grapple with some of the pressing issues that we are facing today. So we'll talk about some of the themes in each of those books as we go through our conversation. But collectively, his writings explore very big concepts like free will and consciousness and intelligence. So we'll have a lot to explore with Yuval today. So with that, please join me in welcoming Professor Yuval to Google.

[APPLAUSE]

[01:52] YUVAL NOAH HARARI: Hello.

[02:01] WILSON WHITE: Thank you, Professor, for joining us. Before getting started, I have to say that when the announcement went out across Google about this talk, I got several emails from many Googlers around the world who told me that they had either read or are currently reading one or multiple of your books. So if you are contemplating a fourth book, maybe on the afterlife, no spoilers during this conversation.

I want to start with maybe some of the themes in both your current book, "21 Lessons," as well as "Homo Deus," because I'm the father of two young kids. I have two daughters, a five-year-old and a three-year-old. And the future that you paint in "Homo Deus" is interesting. So I'd like to ask you, what should I be teaching my daughters?

[02:59] YUVAL NOAH HARARI: That nobody knows how the world will look in 2050, except that it will be very different from today. So the most important things to emphasize in education are things like emotional intelligence and mental stability, because the one thing that they will need for sure is the ability to reinvent themselves repeatedly throughout their lives. It's really the first time in history that we don't really know what particular skills to teach young people, because we just don't know in what kind of world they will be living. But we do know they will have to reinvent themselves. And especially if you think about something like the job market, maybe the greatest problem they will face will be psychological. Because at least beyond a certain age, it's very, very difficult for people to reinvent themselves. So we kind of need to build identities. I mean, if traditionally people built identities like stone houses with very deep foundations, now it makes more sense to build identities like tents that you can fold and move elsewhere. Because we don't know where you will have to move, but you will have to move.

[04:21] WILSON WHITE: You will have to move. So I may have to go back to school now to learn these things so that I can teach the next generation of humans here.

In "21 Lessons for the 21st Century," you tackle several themes that even we at Google, as a company on the leading edge of technology and of how technology is being deployed in society, wrestle with ourselves. Tell me a bit about your thoughts on why democracy is in crisis. That's a theme in the current book, and I want to explore that a bit. Why you think liberal democracy as we knew it is currently in crisis.

[05:04] YUVAL NOAH HARARI: Well, the entire liberal democratic system is built on philosophical ideas we've inherited from the 18th century, especially the idea of free will, which underlies the basic models of the liberal world view like the voter knows best, the customer is always right, beauty is in the eye of the beholder, follow your heart, do what feels good. All these liberal models, which are the foundation of our political and economic system, assume that the ultimate authority is the free choices of individuals.

I mean, there are, of course, all kinds of limitations and boundary cases and so forth, but when push comes to shove, for instance, in the economic field, then corporations will tend to retreat behind this last line of defense: that this is what the customers want. The customer is always right. If the customers want it, it can't be wrong. Who are you to tell the customers that they are wrong? Now of course, there are many exceptions, but this is the basics of the free market. This is the first and last thing you learn. The customer is always right. So the ultimate authority in the economic field is the desires of the customers. And this is really based on a philosophical and metaphysical view about free will, that the desires of the customer emanate from, represent, the free will of human beings, which is the highest authority in the universe. And therefore, we must abide by them. And it's the same in the political field with the voter knows best.

And this was OK for the last two or three centuries. Because even though free will was always a myth and not a scientific reality-- I mean, science knows of only two kinds of processes in nature. It knows about deterministic processes and it knows about random processes. And their combination results in probabilistic processes. But randomness and probability, they are not freedom. They mean that I can't predict your actions with 100% accuracy, because there is randomness. But a random robot is not free. If you connect a robot, say, to a piece of uranium, and the decisions of the robot are determined by the random disintegration of uranium atoms, you will never be able to predict exactly what this robot will do. But this is not freedom. This is just randomness.

Now this was always true from a scientific perspective. Humans, certainly they have a will. They make decisions. They make choices. But they are not free to choose the will. The choices are not independent. They depend on a million factors, genetic and hormonal and social and cultural and so forth, which we don't choose. Now up till now in history, humans were so complicated that from a practical perspective, it still made sense to believe in free will, because nobody could understand you better than you understand yourself. You had this inner realm of desires and thoughts and feelings, and you had privileged access to this inner realm.

[08:42] WILSON WHITE: Yeah, but that hasn't changed today, right? Like, that--

[08:45] YUVAL NOAH HARARI: It has changed. There is no longer-- the privileged access now belongs to corporations like Google. They can have access to things happening ultimately inside my body and brain, which I don't know about. There is somebody out there-- and not just one. All kinds of corporations and governments that maybe not today, maybe in five years, 10 years, 20 years, will have privileged access to what's happening inside me. More privileged than my access. They could understand what is happening in my brain better than I understand it, which means-- they will never be perfect.

[09:25] WILSON WHITE: Right. But you will, as a free person, like, you will have delegated that access or that ability to this corporation or this machine or this--

[09:38] YUVAL NOAH HARARI: No, you don't have to give them permission. I mean, in some countries maybe you have no choice at all. But even in a democracy like the United States, a lot of the information that enables an external entity to hack you, nobody asks you whether you want to give it away or not. Now at present, most of the data that is being collected on humans is still from the skin outwards. We haven't seen anything yet. We are still just at the tip of this revolution, because at present, whether it's Google and Facebook and Amazon or whether it's the government or whatever, they all are trying to understand people mainly on the basis of what I search, what I buy, where I go, who I meet. It's all external.

The really big revolution, which is coming very quickly, will be when the AI revolution and machine learning and all that, the infotech revolution, meets and merges with the biotech revolution and goes under the skin. Biometric sensors or even external devices. Now we are developing the ability, for example, to know the blood pressure of individuals just by looking at them. You don't need to put a sensor on a person. Just by looking at the face, you can tell, what is the blood pressure of that individual? And by analyzing tiny movements in the eyes, in the mouth, you can tell all kinds of things, from the current mood of the person-- are you angry, are you bored-- to things like sexual orientation. So we are talking about a world in which humans are no longer a black box. Nobody really understands what happens inside, so we say, OK. Free will. No, the box is open. And it's open to others, certain others, more than it is open to you. You don't understand what's happening in your brain, but some corporation or government or organization could understand that.

[11:46] WILSON WHITE: And that's a theme that you explore in "Homo Deus" pretty--

[11:50] YUVAL NOAH HARARI: They're both in "Homo Deus" and in "21 Lessons." This is, like, maybe the most important thing to understand: that this is really happening. And at present, almost all the attention goes to the AI. Like, now I've been on a two-week tour of the US for the publication of the book. Everybody wants to speak about AI. Like, AI. When the previous book, "Homo Deus," came out, nobody cared about AI. Two years later, it's everywhere.

[12:17] WILSON WHITE: It's the new hot thing.

[12:18] YUVAL NOAH HARARI: Yeah. And I try to emphasize, it's not AI. The really important thing is actually the other side. It's the biotech. It's the combination. It's only the combination-- it's only with the help of biology that AI becomes really revolutionary. Because just do a thought experiment. Let's say we had the best, the most developed AI in the world, but humans were not animals, not biochemical algorithms, but something like transcendent souls that make decisions through free will. In such a world, AI would not have mattered much, because AI in such a world could never have replaced teachers and lawyers and doctors. You could not even build self-driving cars in such a world. Because to put a self-driving car on the road, you need biology, not just computers. You need to understand humans. For example, if somebody's approaching the road, the car needs to tell, is this an eight-year-old, an 18-year-old, or an 80-year-old, and needs to understand the different behaviors of a human child, a human teenager, and a human adult. And this is biology. And similarly, to have really effective self-driving taxis, you need the car to understand a lot of things about human psychology. The psychology of the passengers coming in, what they want, and so forth. So if you take the biotech out of the equation, AI by itself won't really go very far.

[13:58] WILSON WHITE: So I want to push you there, because I think it's easy to arrive at a dystopian view of what that world would look like with the bio and AI and cognitive abilities of machines when they meet. Like, how that can end up, right? And we see that in Hollywood, and that dystopian view is well documented. But I want to explore with you, like, what are some of the benefits of that combination? And how can that lead to an alternative world view than what's explored more deeply in "Homo Deus?"

[14:38] YUVAL NOAH HARARI: Well, it should be emphasized that there are enormous benefits. Otherwise, there would be no temptation. If it was only bad, nobody would do it. Google wouldn't research it. Nobody would invest in it. And it should also be emphasized that technology is never deterministic. You can build either paradise or hell with these technologies. They don't have just one type of usage. And as a historian and as a social critic and maybe philosopher, I tend to focus more on the dangerous scenarios, simply because, for obvious reasons, the entrepreneurs and the corporations and the scientists and engineers developing these technologies naturally tend to focus on the positive scenarios, on all the good it can do.

But yes, definitely, technology can do a tremendous amount of good to humanity. To take the example of the self-driving cars: at present, about 1.25 million people are killed each year in traffic accidents. More than 90% of these accidents are because of human errors. If we can replace humans with self-driving cars, it's not that we'll have no car accidents. That's impossible. But we'll probably save a million lives every year. So this is a tremendous thing.

And similarly, being able to understand what's happening inside my body also implies that you can provide people with the best health care in history. You can, for example, diagnose diseases long before the person understands that there is something wrong. At present, the human mind or human awareness is still a very critical junction in health care. Like, if something happens inside my body and I don't know about it, I won't go to the doctor. So if something like, I don't know, cancer is now spreading in my liver and I still don't feel anything, I won't go to the doctor. I won't know about it. Only when I start feeling pain and nausea and all kinds of things I can't explain. So after some time, I go to the doctor. He does all kinds of tests. And finally, they discover, oh, something's wrong. And very often, by that time, it's very expensive and painful. Not necessarily too late, but expensive and painful to take care of it. If I could have an AI doctor monitoring my body 24 hours a day with biometric sensors and so forth, it could discover this long before I feel anything, at the stage when it's still very cheap and easy and painless to cure it. So this is wonderful.

play17:34

WILSON WHITE: But in that world, it's

play17:35

an AI doctor, and not a human doctor.

play17:37

And I think one of the potential outcomes

play17:42

that you warn about is AI or machines or that combination

play17:46

of bio and AI replacing us, replacing us as humans.

play17:51

And I'd like to think that one thing that makes us human

play17:57

is having meaning in life or having a purpose for living.

play18:02

That's kind of a unique thing that humans have.

play18:05

And I don't think it's something that we would readily

play18:08

want to give up, right?

play18:09

So as this technology is evolving

play18:12

and we're developing it, is it likely

play18:14

something that we'll bake in, this need

play18:17

to have meaning and purpose in life?

play18:20

You talk about in "21 Lessons" this notion that God is dead,

play18:26

or is God back?

play18:27

And the role that religion may play

play18:32

in how we progress as humans.

play18:37

Is there a place for that notion of God

play18:39

or religion to capture and secure

play18:42

this notion of meaning in life or purpose in life?

play18:45

YUVAL NOAH HARARI: Well, it all depends on the definitions.

play18:48

I mean, there are many kinds of gods,

play18:51

and people understand very different things

play18:53

by the word religion.

play18:56

If you think about God, so usually people

play18:59

have two extremely different gods in mind

play19:03

when they say the word God.

play19:05

One god is the cosmic mystery.

play19:08

We don't understand why there is something rather than nothing,

play19:13

why the Big Bang happened.

play19:14

What is human consciousness?

play19:16

There are many things we don't understand about the world.

play19:18

And some people choose to call these mysteries

play19:22

by the name of God.

play19:23

God is the reason there is something rather than nothing.

play19:27

God is behind human consciousness.

play19:30

But the most characteristic thing of that god

play19:34

is that we know absolutely nothing about him,

play19:38

her, it, they.

play19:40

There is nothing concrete.

play19:43

It's a mystery.

play19:44

And this is kind of the god we talk

play19:46

about when late at night in the desert we sit around a campfire

play19:51

and we think about the meaning of life.

play19:53

That's one kind of god.

play19:54

I have no problem at all with this god.

play19:56

I like it very much.

play19:56

[LAUGHTER]

play19:58

Then there is another god which is the petty lawgiver.

play20:03

The chief characteristic of this god is that

play20:06

we know a lot of extremely concrete things about that god.

play20:11

We know what he thinks about female dress code, what kind

play20:15

of dresses he likes women to wear.

play20:19

We know what he thinks about sexuality.

play20:22

We know what he thinks about food, about politics,

play20:25

and we know these tiny little things.

play20:29

And this is a god people talk about when they stand around,

play20:35

burning a heretic.

play20:37

We'll burn you because you did something

play20:39

that this god-- we know everything about this god,

play20:41

and he doesn't like it that you do this, so we burn you.

play20:45

And it's like a magic trick that when

play20:49

you come and talk about God-- so how

play20:51

do you know that God exists, and so forth?

play20:53

People would say, well, the Big Bang and human consciousness,

play20:57

and science can't explain this, and science can't explain that.

play21:01

And this is true.

play21:03

And then like a magician swapping one card for another,

play21:08

they will, shh!

play21:09

Take out the mystery god and place the petty lawgiver,

play21:13

and you end up with something strange like,

play21:16

because we don't understand the Big Bang,

play21:19

women must dress with long sleeves

play21:21

and men shouldn't have sex together.

play21:24

And what's the connection?

play21:25

I mean, how did you get from here to there?

play21:28

So I prefer to use different terms here.

play21:33

And it's the same with religion.

play21:35

People understand very different things with this word.

play21:39

I tend to separate religions from spirituality.

play21:44

Spirituality is about questions.

play21:47

Religion is about answers.

play21:49

Spirituality is when you have some big question about life

play21:54

like, what is humanity?

play21:56

What is the good?

play21:59

Who am I?

play22:00

WILSON WHITE: Our purpose in life.

play22:01

Like, why are we here?

play22:02

YUVAL NOAH HARARI: What should I do in life?

play22:04

And this is kind of-- and you go on a quest,

play22:06

looking deeply into these questions.

play22:10

And you're willing to go after these questions

play22:13

wherever they take you.

play22:14

WILSON WHITE: You could just Google it.

play22:16

YUVAL NOAH HARARI: Yeah.

play22:17

Maybe in the future.

play22:18

But so far, at least some of these questions,

play22:20

I think when you type, like, what is the meaning of life,

play22:23

you get 42.

play22:24

Like, it is the number one result in Google search.

play22:29

So you go on a spiritual quest.

play22:32

And religion is the exact opposite.

play22:34

Religion is somebody comes and tells you, this is the answer.

play22:37

You must believe it.

play22:39

If you don't believe this answer,

play22:40

then you will burn in hell after you die,

play22:43

or we'll burn you here even before you die.

play22:45

[LAUGHTER]

play22:47

And it's really opposite things.

play22:49

Now I think that at the present moment in history,

play22:54

spirituality is probably more important

play22:56

than in any previous time in history,

play22:58

because we are now forced to confront spiritual questions,

play23:04

whether we like it or not.

play23:06

WILSON WHITE: And do you think that confrontation

play23:08

with those questions, that will inform how we allow technology

play23:13

to develop and be deployed?

play23:14

YUVAL NOAH HARARI: Exactly. Now throughout history,

play23:17

you always had a small minority of people

play23:20

who were very interested in the big spiritual

play23:23

and philosophical questions of life,

play23:26

and most people just ignored them and went along with their,

play23:29

like, you know, fighting about who owns this land

play23:32

and who this goat herd belongs to, and so forth.

play23:36

Now we live in a very unique time in history

play23:38

when engineers must tackle spiritual questions.

play23:43

If you are building a self-driving car, by force,

play23:48

you have to deal with questions like free will.

play23:52

By force, you have to deal with the example everybody gives.

play23:55

The self-driving car.

play23:57

Suddenly two kids jump--

play23:59

running after a ball jump in front of the car.

play24:01

The only way to save the two kids is to swerve to the side

play24:05

and fall off a cliff and kill the owner of the car who

play24:08

is asleep in the backseat.

play24:09

What should the car do?

play24:12

Now philosophers have been arguing

play24:14

about these questions for thousands of years

play24:16

with very little impact on human life.

play24:22

But engineers, they are very impatient.

play24:26

If you want to put the self-driving car on the road

play24:29

tomorrow or next year, you need to tell

play24:33

the algorithm what to do.

play24:34

And the amazing thing about this question

play24:37

now is that whatever you decide, this will actually happen.

play24:42

Previously, with philosophical discussions, like you had,

play24:45

I don't know, Kant and Schopenhauer and Mill

play24:48

discussing this issue, should I kill the two kids

play24:51

or should I sacrifice my life?

play24:54

And even if they reached an agreement--

play24:57

it had very little impact on actual behavior.

play25:00

Because even if you agree theoretically,

play25:03

this is the right thing to do, at a time of crisis,

play25:07

philosophy has little power.

play25:09

You react from your gut, not from

play25:12

your philosophical theories.

play25:14

But with a self-driving car, if you program the algorithm

play25:18

to kill the driver--

play25:20

and not the driver, the owner of the car, and not

play25:23

the two kids, you have a guarantee,

play25:25

a mathematical guarantee that this is

play25:27

exactly what the car will do.
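Harari's point about the "mathematical guarantee" can be made concrete with a toy sketch: once a rule is encoded, the car follows it deterministically. The rule below (pick the action with the fewest expected deaths) is one hypothetical choice among many, not a recommendation:

```python
# Toy sketch: whatever ethical rule engineers encode, the outcome in a
# given scenario is fixed in advance. This particular rule minimizes
# expected deaths; it is illustrative only.

def choose_action(actions):
    """actions: dict mapping action name -> expected number of deaths."""
    return min(actions, key=actions.get)

scenario = {
    "stay_course": 2,   # hit the two kids
    "swerve": 1,        # go off the cliff, kill the owner
}
assert choose_action(scenario) == "swerve"   # the rule guarantees this outcome
```

Whatever rule replaces `choose_action`, the car's behavior is settled before the crisis ever happens, which is exactly the shift from philosophical debate to engineering that he describes.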

play25:31

So you have to think far more carefully than ever before,

play25:34

what is the right answer?

play25:36

So in this sense, very old spiritual and philosophical

play25:41

questions are now practical questions of engineering,

play25:45

which you cannot escape if you want, for example,

play25:48

to put a self-driving car on the road.

play25:50

WILSON WHITE: I want to go back to this concept of religion

play25:53

versus spirituality and the role they play

play25:56

in "Sapiens," your first book.

play25:58

You talk about this concept of human fictions or stories

play26:03

that we create as humans, I guess to get us through life

play26:08

and to get us through our interactions with each other.

play26:12

Those fictions, those stories, as you put it,

play26:15

they've served us well.

play26:17

They've resulted in a lot of good for humankind,

play26:21

but have also been the source of wars and conflict

play26:26

and human suffering.

play26:28

How do you square that with this moment

play26:30

we're in where spirituality is an integral part in how

play26:34

we think about integrating technology in our lives?

play26:39

YUVAL NOAH HARARI: Phew.

play26:40

That's a big question.

play26:42

Well, so far in history, in order

play26:46

to organize humans on a large scale,

play26:49

you always had to have some story, some fiction which

play26:56

humans invented, but which enough humans believed in order

play27:01

to agree on how to behave.

play27:03

It's not just religion.

play27:04

This is the obvious example.

play27:08

And even religious people would agree

play27:10

that all religions except one are fictional stories.

play27:14

[LAUGH]

play27:15

Except for, of course, my religion.

play27:18

If you ask a Jew, then he will tell you, yes.

play27:20

Judaism is the truth.

play27:21

That's for sure.

play27:22

But all these billions of Christians

play27:24

and Muslims and Hindus, they believe in fictional stories.

play27:28

I mean, all this story about Jesus rising from the dead

play27:30

and being the Son of God, this is fake news.

play27:32

WILSON WHITE: Wait, that's not true?

play27:34

YUVAL NOAH HARARI: If you ask a Jew, like a rabbi.

play27:36

Even though rabbis tend to be, like-- to hedge their bets.

play27:38

[LAUGH]

play27:39

So maybe not.

play27:40

But then you go to the Christians.

play27:42

They will say, no, no, no, no, no no.

play27:44

This is true.

play27:45

But the Muslims, they believe in fake news.

play27:48

All this story about Muhammad meeting

play27:50

the archangel Gabriel and the Quran coming from Heaven,

play27:53

this is all fake news.

play27:54

And then the Muslims, they'll tell you this about Hinduism.

play27:56

So even in religion, it's very clear.

play27:59

The more interesting thing is that the same

play28:02

is true of things in the economy.

play28:06

Corporations. You can't have a modern economy

play28:09

without corporations like Google and without money,

play28:12

like dollars.

play28:13

But corporations and currencies, they

play28:16

are also just stories we invented.

play28:19

Google has no physical or biological reality.

play28:23

It is a story created by the powerful shamans

play28:28

we call lawyers.

play28:29

[LAUGHTER]

play28:31

Even if you ask a lawyer, what is Google,

play28:34

like, you push them to, what is it,

play28:36

they will tell you it's a legal fiction.

play28:40

It's not this chair.

play28:40

It belongs to Google, I think.

play28:42

But it's not it.

play28:45

It's not the money.

play28:46

It's not the managers.

play28:47

It's not the workers.

play28:48

It's a story created by lawyers.

play28:51

And for example, I mean, if somehow

play28:53

some natural calamity destroys it--

play28:56

like, there is an earthquake and the Googleplex collapses,

play29:00

Google still exists.

play29:02

Even if many of the workers and managers are killed,

play29:04

it just hires new ones.

play29:05

[LAUGHTER]

play29:06

And it still has money in the bank.

play29:08

And even if there is no money in the bank, they can get a loan

play29:12

and build new buildings and hire new people,

play29:14

and everything is OK.

play29:16

But then if you have the most powerful shaman

play29:19

like the Supreme Court of the United States comes and says,

play29:23

I don't like your story.

play29:25

I think you need to be broken into different fictions.

play29:30

Then that's the end.

play29:31

WILSON WHITE: So-- so you--

play29:33

[LAUGHTER]

play29:35

That's a lot to unpack.

play29:36

[LAUGHTER]

play29:38

So the advent that we're in now with fake news

play29:41

and really seriously questioning what veracity means

play29:46

and how veracity impacts these kind of foundational things

play29:52

that you laid out earlier in your remarks that have allowed

play29:55

us to work with each other, work across borders, et cetera,

play30:02

with this, where you are on this notion of stories and fictions

play30:06

that we have, is this advent of fake news, is that a reality?

play30:11

Is that where we should be in terms of questioning what's

play30:15

true and what's not true?

play30:16

YUVAL NOAH HARARI: On the one hand, fake news is old news.

play30:19

We've had them throughout history,

play30:22

and sometimes in much worse form than what we see today.

play30:27

WILSON WHITE: But is there such thing as truth?

play30:29

YUVAL NOAH HARARI: Yes, there is absolutely.

play30:31

I mean, there is reality.

play30:33

I mean, you have all these stories

play30:34

people tell about reality.

play30:36

WILSON WHITE: I see.

play30:37

YUVAL NOAH HARARI: But ultimately, there is reality.

play30:40

The best test of reality that I know is the test of suffering.

play30:44

Suffering is the most real thing in the world.

play30:47

If you want to know whether a story is

play30:49

about a real entity or a fictional entity,

play30:53

you should just ask, can this entity actually suffer?

play30:56

Now Google cannot suffer.

play30:58

Even if the stock goes down, even if a judge comes and says,

play31:03

this is a monopoly, you have to break it up, it doesn't suffer.

play31:07

Humans can suffer like the managers,

play31:12

the owners of the stocks, the employees, they can suffer.

play31:16

WILSON WHITE: My girls.

play31:17

YUVAL NOAH HARARI: Yeah.

play31:18

They can certainly suffer.

play31:19

But we can tell very easily that Google is just a story

play31:23

by this simple test that it cannot suffer.

play31:27

And it's the same with nations.

play31:28

It's the same with currencies.

play31:30

The dollar is just a fiction we created.

play31:32

The dollar doesn't suffer if it loses its value.
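The "test of suffering" can be phrased as a toy predicate; the entity labels below are illustrative, not a taxonomy from the talk:

```python
# Toy version of Harari's test: a story refers to a real entity only if
# that entity can actually suffer. The table is hypothetical.

CAN_SUFFER = {
    "human": True,
    "cow": True,
    "corporation": False,   # Google cannot suffer, its employees can
    "nation": False,
    "dollar": False,        # the dollar doesn't suffer if it loses value
}

def is_real_entity(kind):
    return CAN_SUFFER.get(kind, False)

assert is_real_entity("human")
assert not is_real_entity("corporation")
```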

play31:35

WILSON WHITE: Let me push you on that, right?

play31:36

So oftentimes, like just in the US,

play31:40

they say kind of the system we set up in the US

play31:44

is an experiment.

play31:44

It's often styled as an experiment democracy

play31:48

with checks and balances, et cetera.

play31:51

Under one view of that, you can say that that's kind of a story

play31:55

that we've created in America, right?

play31:56

We've created this kind of really nice story.

play32:00

But if that was broken apart, like,

play32:02

that entity is not suffering.

play32:06

But if that experiment is the thing, the proper functioning

play32:12

of those institutions and the things

play32:13

that support that-- so that's the thing.

play32:15

YUVAL NOAH HARARI: We know that it functions properly

play32:17

because it alleviates suffering.

play32:20

It provides health care, it provides safety.

play32:24

And if it doesn't, then we would say

play32:26

the experiment doesn't work.

play32:28

The experiment--

play32:29

WILSON WHITE: So would you say that experiment is a fiction?

play32:33

Or is that experiment reality?

play32:35

Is it a thing?

play32:36

YUVAL NOAH HARARI: The experiment

play32:37

is a story that we share.

play32:39

It's things that we humans have invented and created

play32:43

in order to serve certain needs and desires that we have.

play32:50

It is a created story, and not an objective reality.

play32:56

But it is nevertheless one of the most powerful forces

play32:59

in the world.

play33:00

When I say that something is a fiction or a story,

play33:02

I don't mean to imply it's bad or that it's not important.

play33:06

No.

play33:07

Some of the best things in the world

play33:09

and the most powerful forces in the world

play33:12

are these shared fictions.

play33:15

Nations and corporations and banks and so forth,

play33:19

they are all stories we created, but they

play33:23

are the most powerful forces today in the world,

play33:26

far more powerful than any human being or any animal.

play33:31

And they can be a tremendous force for good.

play33:34

The key is to remember that we created them to serve us,

play33:39

and not that we are here in order to serve them.

play33:43

The trouble really begins when people

play33:46

lose sight of the simple reality that we are real, they are not.

play33:53

And a lot of people throughout history and also

play33:56

today, they kind of take it upside down.

play33:59

They think the nation is more real than me.

play34:03

I am here to serve it, and not it is here

play34:06

to serve me and my fellow humans.

play34:09

WILSON WHITE: Very interesting.

play34:10

So we're going to open it up for questions

play34:12

from the audience in a few minutes here,

play34:14

but I want to try to get an easy win.

play34:17

So in "21 Lessons," you tackle really big challenges

play34:22

and questions that we're wrestling with today.

play34:25

Of those questions, which do you think is the easiest to solve?

play34:30

And what should we be doing to go about solving them?

play34:33

YUVAL NOAH HARARI: Ooh.

play34:34

What is the easiest to solve?

play34:38

[EXHALE]

play34:38

[LAUGH]

play34:39

WILSON WHITE: Trying to get quick wins on the board here.

play34:41

YUVAL NOAH HARARI: Yeah.

play34:43

I'll address the fake news question,

play34:46

not necessarily because it's the easiest to solve, but

play34:48

maybe because it's one of the most relevant to what

play34:51

you're doing here in Google.

play34:53

And I would say that the current incarnation of the fake news

play34:58

problem has a lot to do with the model of the news

play35:03

and information market, that we have constructed a model which

play35:10

basically says, exciting news for free

play35:16

in exchange for your attention.

play35:19

And this is a very problematic model,

play35:23

because it turns human attention into the most scarce resource,

play35:28

and you get more and more competition for human attention

play35:32

with more and more exciting news that-- again,

play35:35

and some of the smartest people in the world

play35:37

have learned how to excite our brain,

play35:40

how to make us click on the next news story.

play35:43

And truth gets completely pushed aside.

play35:46

It's not part of the equation.

play35:49

The equation is excitement, attention.

play35:51

Excitement, attention.

play35:52

And on the collective level, I think

play35:55

the solution to this problem would

play35:58

be to change the model of the news market

play36:02

to high-quality news that costs you a lot of money,

play36:07

but doesn't abuse your attention.

play36:10

It's very strange that we are in a situation when people

play36:14

are willing to pay a lot of money

play36:16

for high-quality food and high-quality cars,

play36:20

but not for high-quality news.

play36:23

And this has a lot to do with the architecture

play36:27

of the information market.

play36:29

And I think there are many things that you here in Google

play36:31

can do in order to help society change the model of the news

play36:36

market.
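The market model he criticizes can be sketched as a toy ranking choice: a feed optimized for engagement never consults accuracy at all, while a quality-first model ranks on it directly. All stories and numbers below are invented for illustration:

```python
# Toy model of "free exciting news in exchange for attention": truth is
# simply not a variable in the engagement ranking. Data is hypothetical.

stories = [
    {"headline": "Shocking miracle cure!", "engagement": 0.9, "accuracy": 0.1},
    {"headline": "Budget report, annotated", "engagement": 0.2, "accuracy": 0.95},
]

def rank_for_attention(feed):
    # accuracy never enters the equation, only excitement
    return sorted(feed, key=lambda s: s["engagement"], reverse=True)

def rank_for_quality(feed):
    # a paid, quality-first model could rank on accuracy instead
    return sorted(feed, key=lambda s: s["accuracy"], reverse=True)

assert rank_for_attention(stories)[0]["headline"] == "Shocking miracle cure!"
assert rank_for_quality(stories)[0]["headline"] == "Budget report, annotated"
```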

play36:37

WILSON WHITE: I'd want to continue to explore that,

play36:40

and whether that would create, like, an economic divide

play36:43

or exacerbate the current divide,

play36:45

but I'm going to open it up now for audience questions.

play36:48

We have a microphone here on the side.

play36:53

Start with you.

play36:53

AUDIENCE: Hi.

play36:54

Thank you so much for writing your books.

play36:55

They are completely wonderful, and I've

play36:57

had a joy reading them.

play36:59

So one of the things that you kind of explored here

play37:02

is we are facing a couple of global problems.

play37:07

And historically, we have never created global organizations

play37:13

which are responsible for solving global problems and had

play37:17

any ability to enforce their solutions.

play37:19

And even when we've created them,

play37:20

they have come after great tragedies.

play37:24

So how can we sort of make that happen and make somebody

play37:28

responsible, and have the ability

play37:30

to have those organizations enforce those solutions?

play37:35

YUVAL NOAH HARARI: Yeah.

play37:36

I mean, it's not going to be easy.

play37:40

But I think the most important thing

play37:42

is to change the public conversation

play37:47

and focus it on the global problems.

play37:51

If people focus on local problems,

play37:55

they don't see the need for effective global cooperation.

play38:00

So the first step is to tell people again and again

play38:04

and again, look.

play38:05

The three biggest problems that everybody on the planet

play38:09

is now facing are nuclear war, climate change,

play38:13

and technological disruption.

play38:15

And even if we are able to prevent nuclear war and climate

play38:19

change, still AI and biotech are going

play38:23

to completely disrupt the job market and even the human body.

play38:28

And we need to figure out how to regulate this

play38:32

and how to prevent the dystopian consequences,

play38:35

and make sure that the more utopian

play38:37

consequences materialize.

play38:39

And for that, we need global cooperation.

play38:41

So it would be obvious to everybody,

play38:44

you cannot prevent climate change on a national level,

play38:49

and you cannot regulate AI on a national level.

play38:53

Whatever regulation the US adopts,

play38:55

if the Chinese are not adopting it, it won't be of much help.

play38:59

So you need cooperation here.

play39:03

And then it goes into practical political issues.

play39:06

I mean, you have elections coming up,

play39:07

mid-term elections in the US.

play39:09

So if you go to a town meeting with an inspiring congressman

play39:14

or congresswoman, so you just ask them, if I elect you,

play39:19

what will you do about the danger of climate change,

play39:22

about the danger of nuclear war, and about getting

play39:25

global regulations for AI and for biotech?

play39:28

What's your plan?

play39:30

And if they say, oh, I haven't thought about it,

play39:33

then maybe don't vote for that person.

play39:36

[LAUGHTER]

play39:39

WILSON WHITE: Question.

play39:41

AUDIENCE: Hi, Yuval.

play39:43

Thanks for coming here today.

play39:44

So in one of your talks, you suggested

play39:47

that to avoid getting our hearts hacked,

play39:50

we need to stay ahead by knowing ourselves better.

play39:54

And it seems to me that the process of knowing yourself

play39:56

needs a lot of intelligence.

play39:58

And in some ways, it's a skill that needs to be developed.

play40:01

I mean, the intellect that we have as humans

play40:04

seems fairly new when compared to other properties

play40:07

that we got evolutionarily.

play40:10

So how do you suggest that we can

play40:12

learn to think and use our intelligence better, and also

play40:15

do that at a scale?

play40:17

Because if only some people know themselves

play40:20

but millions or billions around you

play40:22

don't, then you can only go so far.

play40:25

YUVAL NOAH HARARI: No, I don't think that knowing yourself

play40:27

is necessarily all about intelligence.

play40:31

Certainly not in the narrow sense of intelligence.

play40:34

If you include emotional intelligence and so forth,

play40:37

then yes.

play40:38

But in the more narrow sense of IQ, I think this is not--

play40:44

there are many very intelligent people

play40:46

in the world who don't know themselves

play40:48

at all, which is an extremely dangerous combination.

play40:53

Now some people explore themselves through therapy.

play40:59

Some use meditation.

play41:00

Some use art.

play41:01

Some use poems.

play41:02

They go on a long hike, go for a month to the Appalachian Trail

play41:08

and get to know themselves on the way.

play41:10

There are many ways to do it, which are not necessarily

play41:13

about intellect.

play41:15

It's not like reading articles about brain science.

play41:20

That's going to help in some ways.

play41:22

And in this sense, I think it's a very kind

play41:24

of democratizing ability or force to get to know yourself.

play41:32

After all, you-- you're always with yourself.

play41:35

It's not like you need some special observatory or to get

play41:38

some very rare machines that, I don't know,

play41:42

cost millions of dollars.

play41:43

You just need yourself.

play41:45

AUDIENCE: Sure.

play41:46

But what about the art of thinking?

play41:48

YUVAL NOAH HARARI: What about?

play41:50

AUDIENCE: The art of thinking.

play41:52

YUVAL NOAH HARARI: The art of thinking.

play41:54

AUDIENCE: I mean, people are very intelligent,

play41:56

but they don't really use their intelligence

play41:58

to understand themselves [INAUDIBLE]..

play42:00

YUVAL NOAH HARARI: Yeah.

play42:01

Again, there is no easy way to do it.

play42:04

If it was easy to get to know yourself better,

play42:07

everybody would do it long ago, and we

play42:09

would be living in a very, very different world.

play42:13

WILSON WHITE: We have folks joining us

play42:14

from around the world as well, so I have a question

play42:17

from the question bank.

play42:20

Compassion is the critical underpinning

play42:22

of any successful society, yet I believe

play42:24

that technology is reducing our capacity for empathy.

play42:28

It feels that we no longer value compassion, perhaps even seeing

play42:31

compassion as weak.

play42:33

What are, in your view, effective ways

play42:35

to motivate members of society to develop their compassion?

play42:38

YUVAL NOAH HARARI: No, I don't think

play42:40

that technology is inherently undermining compassion.

play42:44

It can go both ways.

play42:47

Certainly, communication technology

play42:51

can make you aware of the plight of people

play42:54

on the other side of the world.

play42:56

And without that, you may be extremely

play42:58

compassionate about your immediate, like, family members

play43:02

and neighbors, and won't care at all about people

play43:05

on the other side of the world.

play43:07

So I don't think there is an inherent contradiction

play43:10

or collision between technology and compassion.

play43:15

But it is true that the way we design technology

play43:21

can make us less compassionate, and even the way

play43:26

that we design ourselves.

play43:29

For most of history, you had economic and political systems

play43:33

trying to shape people.

play43:35

And in the past, they did it with education

play43:38

and with culture.

play43:41

And in the present and future, we

play43:43

are likely to do it more and more

play43:44

with biotech and with brain computer interfaces.

play43:49

So our ability to manipulate ourselves is growing.

play43:54

And therefore, it's extremely important

play43:56

to remember to take compassion into account.

play44:01

Otherwise, the danger is that armies and corporations

play44:05

and governments in many cases, they

play44:07

want something like intelligence.

play44:09

They want more intelligent workers and soldiers.

play44:13

They want more decisive workers.

play44:15

And sort of, don't take a whole day to decide.

play44:17

I want you to decide this in half an hour.

play44:20

And as our ability to manipulate humans--

play44:23

and I mean manipulate--

play44:25

re-engineer the body and the brain-- as it grows,

play44:29

we might engineer more decisive and intelligent humans

play44:35

at the price of compassion.

play44:38

Which many corporations and armies and governments

play44:42

find either irrelevant or even problematic,

play44:46

because it causes people to be hesitant

play44:48

and to take more time about the decisions,

play44:50

and so on and so forth.

play44:52

So we need to remember the enormous importance

play44:56

of compassion.

play44:58

And again, it goes back also to the question

play45:00

about getting to know yourself, which

play45:04

I think is the key to developing compassion.

play45:07

Not just because when you understand your own mind--

play45:10

oh, this makes me miserable-- then you understand, oh,

play45:13

the same thing may make other people also miserable.

play45:16

It's even much deeper than that.

play45:19

When you really get to know yourself,

play45:21

you realize that when you ignore others

play45:26

and when you mistreat others, very often, it harms you

play45:31

even before it harms them.

play45:33

It's a very unpleasant experience to be angry.

play45:38

So your anger may harm other people, or maybe not.

play45:42

Maybe you're boiling with anger about somebody,

play45:44

and you don't do anything about it because she's your boss.

play45:48

But you don't harm her, but your anger harms you.

play45:53

So the more you understand yourself, the greater incentive

play45:58

you have to do something about your anger, your hatred,

play46:03

your fear.

play46:04

And most people discover that as they develop more compassion

play46:08

towards others, they also experience far more peace

play46:12

within themselves.

play46:14

WILSON WHITE: Wow.

play46:15

Another live question.

play46:18

AUDIENCE: Thank you.

play46:18

After reading your books, it occurs to me

play46:20

that you've most likely educated yourself both broadly

play46:23

and deeply to be the foundation for your ideas.

play46:26

For those of us that are interested in cultivating

play46:28

our mind similarly, wondering if you could share

play46:30

a little bit about your reading habits

play46:31

and how you choose what to consume.

play46:33

YUVAL NOAH HARARI: My reading habits.

play46:35

I read very eclectically.

play46:37

Like, no book is barred from entering the book list.

play46:45

But then I tend to be extremely impatient about the books I

play46:49

actually read.

play46:50

I would begin, like, 10 books and drop nine of them

play46:53

after 10 pages.

play46:56

It's not always the wisest policy,

play46:59

but it's my policy that if a book didn't really teach me

play47:05

something new or have some interesting insight

play47:08

in the first 10 pages, the chances it will--

play47:11

it could be that on page 100 there

play47:14

will be some mind-blowing idea that I'm now missing.

play47:18

But there are so many--

play47:20

I keep thinking, there are so many books,

play47:23

wonderful books out there that I will never read,

play47:27

so why waste time on the less optimal book?

play47:33

So I will try, like, a book on biology and then economics

play47:36

and then psychology and then fiction and whatever,

play47:39

and just go through them quite quickly until I find

play47:42

something that really grabs me.

play47:46

WILSON WHITE: Another live question.

AUDIENCE: Hi, Mr. Harari. Thanks for being here. Fascinating talk as always. I do a little bit of meditation myself, and I've heard that you do a lot of meditation, on the order of hours a day. Is that right?

YUVAL NOAH HARARI: I try to do two hours every day, and I try to go every year to a long retreat of 45 or 60 days.

AUDIENCE: So I was wondering, how do you feel that has influenced your life and the ideas that you have?

YUVAL NOAH HARARI: Oh, it's had a tremendous influence, I think both on my inner peace of mind and on my work as a scientist. And maybe the two most important influences are these. First, it enabled me to have more clarity and more focus. And certainly when you write about such big subjects, like trying to summarize the whole of history in 400 pages, having a very, very focused mind is very important, because the great difficulty is that everything distracts you. You start writing about the Roman Empire and you say, well, I have to explain this and this and this and this, and you end up with 4,000 pages. So you have to be very clear about what is really important and what can be left out.

And the other thing is that the meditation I practice, which is Vipassana meditation, is all about really knowing the difference between the fictions and stories generated by our mind and the reality. What is really happening right now? And when I meditate, what happens is that the mind is like a factory that constantly generates stories, about myself, about other people, about the world. And they are very attractive. Like, I get identified with them. And the meditation is constantly: don't. It's just a story. Leave it. Just try to stay with what is really happening right now. And this is the central practice in meditation. It's also a guiding principle when I study history or when I study what's happening in the world.

AUDIENCE: Great. Thank you.

WILSON WHITE: Let's take another question from the Dory. With inequality rising across most nations in the last few decades, what is your perspective on how we can use technological growth to solve this problem and create a more equitable world? Do we need a different economic paradigm to achieve this?

YUVAL NOAH HARARI: Yes, we probably need a different economic paradigm, because we are entering uncharted waters, especially because of the automation revolution and the growing likelihood that more and more people might be completely pushed out of the job market. Not just because there won't be enough jobs, but simply because the pace of change in the job market will accelerate. So even if there are enough jobs, people may not have the psychological balance and stamina to constantly retrain, reskill, or reinvent themselves. And so I think the biggest problem in the job market is really going to be the psychological problem.

And then what do you do when more and more people are left out? There are explorations of new models like universal basic income and so forth, which are worth exploring. I don't have the answers. I will just say that anybody who thinks in terms like universal basic income should take the word universal very, very seriously, and not settle for national basic income. Because the greatest inequality we are facing will probably be inequality between countries, and not within countries. Some countries are likely to become extremely wealthy due to the automation revolution, and California is certainly one of these places. Other countries might lose everything, because their entire economy depends on things like manual labor, which will lose its importance, and they just don't have the resources and the educational system to turn themselves into high-tech hubs.

So the really crucial question is not, what do we do about, I don't know, Americans in Indiana who lose their jobs? The really important question is, what do we do about people in Guatemala or Bangladesh who lose their jobs? This, I think, should be the focus of the question of inequality.

WILSON WHITE: OK. We'll take another live question.

AUDIENCE: Hello, Mr. Harari. Thank you for doing this Q&A. So at Google, we have a responsibility to build products and services which not only achieve results for our shareholders, but also actually benefit our end users. So in order to spend less time hacking humans and more time reducing suffering, we need to understand what type of future we want to build. What I wanted to ask you is, what are your personal methodologies for making predictions about the future? And what suggestions would you give to Googlers who want a better-informed understanding of the future?

YUVAL NOAH HARARI: As I said at the very beginning, I don't think we can predict the future, but I think we can influence it. What I try to do as a historian, and even when I talk about the future I define myself as a historian, because I think that history is not the study of the past. History is the study of change: how human societies and political systems and economies change. And what I try to do is map different possibilities rather than make predictions, saying this is what will happen in 2050.

And we need to keep a very broad perspective. One of the biggest dangers is when we have a very narrow perspective: we develop a new technology and we think, oh, this technology will have this outcome. And we are convinced of this prediction, and we don't take into account that the same technology might have very different outcomes. And then we don't prepare. And again, as I said in the beginning, it's especially important to take into account the worst possible outcomes in order to be aware of them.

So I would say, whenever you are thinking about the future impact of a technology you are developing, create a map of different possibilities. If you see just one possibility, you're not looking wide enough. If you see two or three, it's probably also not wide enough. You need a map of, like, four or five different possibilities, minimum.

WILSON WHITE: Let's take another live question.

AUDIENCE: Hey, Mr. Harari. So my question is-- I'll start very broad, and then I'll narrow it down. I'm really interested in what you think are the components that make these fictional stories so powerful in how they guide human nature. And then, narrowing it down, I'm specifically interested in the self-destructive behavior of humans. How can these fictional stories, led by a few people, convince the masses to literally kill or die for that fictional story?

YUVAL NOAH HARARI: It again goes back to hacking the brain and hacking the human animal. It's been done throughout history, previously just by trial and error, without the deep knowledge of brain science and evolution we have today. But to give an example: if you want to convince people to persecute and exterminate some other group of people, what you need to do is latch onto the disgust mechanisms in the human brain. Evolution has shaped Homo sapiens with very powerful disgust mechanisms in the brain to protect us against all kinds of sources of potential disease. And if you look at the history of bias and prejudice and genocide, one recurring theme is that it repeatedly latches onto these disgust mechanisms. So you would find things like, women are impure, or these other people smell bad and bring diseases. Very, very often, disgust is at the center. So you'll often find comparisons between certain types of humans and rats or cockroaches, or all kinds of other disgusting things.

So if you want to instigate genocide, you start by hacking the disgust mechanisms in the human brain. And this is very, very deep. And if it's done from an early age, it's extremely difficult to undo afterwards. People may know intellectually that it's wrong to say that these people are disgusting, that these people smell bad. But they know it only intellectually. When you place them in a brain scanner, they can't help it. I mean, we can still do something about it. We can still defeat this. But it's very difficult, because it really goes to the core of the brain.

WILSON WHITE: So I'll end on a final question, because we're at time. When Larry and Sergey founded Google, they did so with a deep belief in technology's ability to improve people's lives everywhere. So if you had a magic wand and could give Google the next big project for us to work on, in 30 seconds or less, what would you grant us as our assignment?

YUVAL NOAH HARARI: An AI system that gets to know me in order to protect me, and not in order to sell me products or make me click on advertisements and so forth.

WILSON WHITE: All right. Mission accepted.

[LAUGH]

Thank you, guys.

[APPLAUSE]
