Artificial Intelligence & Personhood: Crash Course Philosophy #23

CrashCourse
8 Aug 2016 · 09:25

Summary

TL;DR: In this Crash Course Philosophy episode, the question of whether non-living beings like robots can be considered persons is explored. The video distinguishes between weak AI, which mimics human intelligence in a narrow sense, and strong AI, which possesses human-like cognitive abilities. The Turing Test is introduced as a method to determine if a machine can 'think' like a human. Philosophers like William Lycan argue for the personhood of AI, while John Searle's 'Chinese Room' thought experiment challenges the notion that AI can truly understand or 'think'. The episode ponders the implications of creating beings that meet the threshold of personhood.

Takeaways

  • 🤖 The script discusses the philosophical question of whether a robot or non-living being could be considered a person.
  • 🧠 It differentiates between 'Weak AI', which mimics certain human intelligence tasks, and 'Strong AI', which possesses human-like thinking abilities.
  • 👨‍🔬 Alan Turing's Turing Test is introduced as a method to evaluate if a machine can exhibit intelligent behavior indistinguishable from that of a human.
  • 🤔 The script ponders whether the ability to convincingly mimic human behavior is sufficient to establish personhood in machines.
  • 🧐 William Lycan's perspective is presented, arguing that a robot like 'Harry', with lifelike characteristics and behaviors, should be considered a person despite being artificially created.
  • 🧬 Lycan challenges the notion that programming negates personhood, pointing out that humans are also 'programmed' by genetics and upbringing.
  • 💭 The concept of the 'soul' as a differentiator between humans and robots is questioned, with Lycan suggesting that personhood should not be contingent on such metaphysical beliefs.
  • 🤷‍♂️ John Searle's 'Chinese Room' thought experiment is mentioned to critique the Turing Test, arguing that mere symbol manipulation does not equate to understanding or consciousness.
  • 🤝 The script concludes that personhood is a complex issue, and our current definitions may need to evolve as technology advances.
  • 🔮 The episode ends with a teaser for the next topic: the question of free will in the context of artificial intelligence.

Q & A

  • What is the main concern of the video regarding artificial intelligence?

    -The main concern is whether a non-living being, like a robot, could be considered a person, and how we should treat potential new persons if we create beings that meet the threshold of personhood.

  • What is the difference between Weak AI and Strong AI as discussed in the video?

    -Weak AI refers to machines or systems that mimic some aspect of human intelligence within a narrow range, like Siri or auto-correct. Strong AI, on the other hand, is a machine or system that actually thinks like humans, replicating whatever it is that our brains do.

  • What is the Turing Test, and what does it aim to demonstrate?

    -The Turing Test is a test devised by Alan Turing to demonstrate when a machine has developed the ability to think like humans. A judge converses, via text, with two interlocutors, one human and one AI, without being told which is which; if the AI can fool the judge into thinking it's human, it passes.

  • What is the 'Chinese Room' thought experiment, and what does it argue?

    -The 'Chinese Room' is a thought experiment by John Searle that argues simply passing for human, like in the Turing Test, is not sufficient to qualify for strong AI. It illustrates that a machine can manipulate symbols without understanding them, fooling people into thinking it knows something it doesn't.

  • According to William Lycan, why should a robot like Harry be considered a person?

    -William Lycan argues that a robot like Harry should be considered a person because it displays intentions and emotions, and can act beyond its programming, similar to humans. He also points out that our origins and material constitutions are different, but that doesn't negate personhood.

  • What is the significance of the Turing Test being behavior-based?

    -The significance of the Turing Test being behavior-based is that it suggests if a machine can convincingly mimic human behavior, it may be indistinguishable from a human in terms of thinking, as behavior is often the standard we use to judge each other's cognition.

  • Why does the video mention the idea of programming in relation to personhood?

    -The video mentions programming to argue that just as humans are 'programmed' by their genetic code and upbringing, robots can also be programmed to act in certain ways. This challenges the notion that programming inherently disqualifies a being from being considered a person.

  • What is the philosophical issue that the video suggests will be explored in the next episode?

    -The philosophical issue that will be explored in the next episode is whether any of us have free will, which is an issue that has been lurking around the discussion of artificial intelligence.

  • How does the video use the example of Harry to challenge the concept of personhood?

    -The video uses Harry, a humanoid robot with human-like characteristics and behaviors, to challenge the concept of personhood by suggesting that if we consider Harry a friend and attribute person-like qualities to him, it raises questions about what truly defines a person.

  • What is the role of the 'Thought Bubble' segment in the video?

    -The 'Thought Bubble' segment in the video serves to provide a deeper philosophical perspective on the main topic, in this case, offering John Searle's 'Chinese Room' thought experiment as a critique of the Turing Test and the concept of strong AI.

Outlines

00:00

🤖 The Question of Artificial Intelligence and Personhood

The script begins with a humorous scenario where the presenter expresses concern that their brother might be a robot, prompting a deeper discussion on the nature of personhood and the possibility of non-living beings, like robots, being considered persons. The distinction between weak AI, which mimics certain human intelligence tasks, and strong AI, which possesses human-like thought processes, is explained. The Turing Test, proposed by Alan Turing, is introduced as a method to determine if a machine can think like a human by its ability to fool a human into believing it is human. The script also touches on the philosophical implications of creating beings that meet the threshold of personhood and how society might treat them.

05:04

🧠 Philosophical Debates on AI and the Chinese Room

This paragraph delves into the philosophical debate surrounding AI and personhood. It introduces philosopher William Lycan's argument that a robot named Harry, despite being programmed, exhibits human-like behaviors and should be considered a person. The concept of programming is discussed in the context of both humans and AI, suggesting that our genetic predispositions and learned behaviors are a form of programming too. The script then presents John Searle's 'Chinese Room' thought experiment, which challenges the idea that a machine can have strong AI by merely manipulating symbols without understanding their meaning. The paragraph concludes with a reflection on the uncertainty of what constitutes a person and the presenter's unresolved suspicion about their brother's humanity.

Keywords

💡Robot

A robot is a machine capable of carrying out complex actions automatically, especially one programmable by a computer. In the video, the concept of a robot is explored in the context of personhood, questioning whether a robot could be considered a person if it exhibits human-like behaviors and intelligence, as suggested by the speaker's hypothetical concern about his brother John possibly being a robot.

💡Personhood

Personhood refers to the status or condition of being a person, especially in relation to having certain rights or legal standing. The video discusses the philosophical and ethical implications of determining personhood, particularly in the context of artificial beings like robots, and whether they could be considered persons if they meet certain criteria of intelligence and self-awareness.

💡Weak AI

Weak AI, also known as narrow AI, is a type of artificial intelligence designed to perform specific tasks or mimic certain aspects of human intelligence. The video uses examples like Siri, auto-correct, and calculators to illustrate weak AI, which is characterized by its limited range of abilities and inability to understand or learn beyond its programming.
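The narrowness described above can be made concrete with a toy sketch (the lookup table, example words, and function name are all invented for illustration; this is not how any real auto-correct works). The system mimics one sliver of human intelligence, fixing a handful of known typos, and can do nothing outside that fixed table:

```python
# A deliberately narrow "weak AI": it mimics one small aspect of human
# intelligence (typo correction) via a fixed lookup table, and has no
# abilities beyond that table. Entries are hypothetical examples.
CORRECTIONS = {
    "teh": "the",
    "recieve": "receive",
    "adress": "address",
}

def autocorrect(text: str) -> str:
    """Replace each known typo with its correction; leave all other words alone."""
    return " ".join(CORRECTIONS.get(word, word) for word in text.split())

print(autocorrect("teh wrong adress"))  # -> "the wrong address"
```

Anything not in the table passes through untouched, which is exactly the "relatively narrow range of thought-like abilities" the episode attributes to weak AI.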

💡Strong AI

Strong AI, or artificial general intelligence, refers to a machine or system that possesses the ability to understand, learn, and apply knowledge in a way that is comparable to human intelligence. The video discusses the concept of strong AI in relation to the Turing Test and the philosophical debate on whether a machine that can convincingly mimic human thought processes should be considered to possess strong AI.

💡Turing Test

The Turing Test is a test of a machine's ability to exhibit intelligent behavior that is indistinguishable from that of a human. Proposed by Alan Turing, the test is mentioned in the video as a method to determine if a machine has developed strong AI. The video script describes a scenario where participants converse with both a human and a machine without knowing which is which, and the machine's ability to deceive is key to passing the test.

💡Intentionality

Intentionality refers to the capacity of an entity to have intentions, goals, or desires. In the video, the concept is used to discuss the attributes that might make a robot or AI appear more 'human' or person-like, as it suggests a level of mental activity and purposefulness that is typically associated with human beings.

💡Programming

Programming is the process of creating a set of instructions that direct a computer or machine to perform specific tasks. The video script uses the term to discuss the nature of both human and artificial intelligence, suggesting that humans are also 'programmed' by their genetic code and environmental influences, just as AI is programmed by developers.

💡Soul

The concept of a 'soul' is often used to describe the immaterial essence of a person that distinguishes them from other life forms and objects. In the video, the existence of a soul is considered as a potential criterion for personhood, with the speaker questioning whether a robot could be considered a person if it lacks a soul, and whether the traditional concept of a soul is even applicable to AI.

💡Chinese Room

The Chinese Room is a thought experiment by philosopher John Searle that challenges the idea that a machine can have strong AI if it can pass the Turing Test. The video describes the experiment, where a person with no knowledge of Chinese uses a code book to respond to Chinese messages, fooling the senders into thinking the person understands Chinese. The experiment is used to argue that merely manipulating symbols without understanding does not equate to true intelligence or thought.
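Searle's setup can be sketched as a pure lookup (the phrases and the fallback rule are invented for illustration): the program maps input symbols to output symbols without any representation of meaning anywhere in it, which is exactly the point of the thought experiment.

```python
# A sketch of the Chinese Room as code: the "rule book" maps input
# symbols to output symbols by pure lookup. Nothing in the program
# represents what any phrase means. Example phrases are invented.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(message: str) -> str:
    """Follow the rule book; understanding is never consulted, because there is none."""
    return RULE_BOOK.get(message, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # -> "我很好，谢谢。"
```

To a fluent correspondent the replies may look competent, yet by construction the program only manipulates symbols, which is Searle's argument against equating passing behavior with understanding. The systems reply mentioned later in the episode would say the whole program-plus-rule-book system is the thing that "knows" Chinese.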

💡Free Will

Free will is the power of making choices that are neither determined by natural causality nor predestined by fate or divine will. The video concludes by hinting at the topic of free will in relation to AI, suggesting that future discussions will explore whether AI can possess free will, and what implications this might have for the concept of personhood in artificial beings.

Highlights

The possibility of determining if someone is a human or a robot is explored, emphasizing the difficulty of knowing another's inner workings without direct examination.

The philosophical question of whether a non-living being, such as a robot, could be considered a person is introduced.

The distinction between 'Weak AI', which mimics aspects of human intelligence, and 'Strong AI', which thinks like humans, is explained.

Alan Turing's Turing Test is described as a method to determine if a machine can think like a human.

The Turing Test involves a conversation with both a human and an AI, without knowing which is which, to assess if the AI can convincingly mimic human thought.

The behavior-based nature of the Turing Test is discussed, questioning if behavior is the standard we use to judge each other's personhood.

William Lycan's perspective on personhood and AI is presented, suggesting that a robot like 'Harry' could be considered a person despite being programmed.

Lycan argues that just as humans are programmed by genetics and upbringing, so too can a robot be programmed to exhibit person-like qualities.

The concept of a soul as a criterion for personhood is critiqued, suggesting that if souls exist, they could theoretically be given to robots as well.

John Searle's 'Chinese Room' thought experiment is introduced to challenge the idea that passing the Turing Test is sufficient for strong AI.

Searle argues that a machine can manipulate symbols without understanding, suggesting that passing for human doesn't equate to actual understanding.

The counterargument to the Chinese Room is presented, suggesting that the collective system, including the human and the code book, 'knows' Chinese.

The narrative concludes with a reflection on the uncertainty of what constitutes a person and the potential for robots to be considered as such.

The episode ends with a teaser for the next topic: the question of free will in the context of artificial intelligence.

The episode is sponsored by Squarespace, which offers a platform for creating professional websites without coding.

Transcripts

play00:03

Crash Course Philosophy is brought to you by Squarespace.

play00:06

Squarespace: share your passion with the world

play00:08

Ok, guys, real talk. Uhh, I’m kinda worried.

play00:11

I think my brother John might be a robot.

play00:14

I know, it sounds ridiculous. He looks like a human. Pretty much.

play00:17

And he acts like a human. Most of the time.

play00:19

But how could I really-100-percent-for-sure know that he is what he looks like?

play00:24

At least, without getting a close look at what’s inside him – in his head, his body, his inner workings?

play00:30

And keep in mind, I’m the younger brother.

play00:32

For all I know, Mom and Dad brought him home from Radio Shack, not the hospital.

play00:36

So. How can I tell whether my brother John Green is a human, or just a really intelligent machine?

play00:41

[Theme Music]

play00:51

A couple of weeks ago, we talked about what it means to be a person.

play00:54

But a subject that we need to explore a little better is whether a non-living being, like a robot, could be a person, too.

play01:00

This isn’t just a concern for science fiction writers.

play01:03

This issue matters, because technology is getting better all the time,

play01:06

and we need to figure out how we're going to treat potential new persons,

play01:11

if we end up creating beings that we decide meet the threshold of personhood.

play01:14

I'm talking about – robots, androids, replicants, cylons, whatever you call ‘em.

play01:18

If you read and watch the right stuff, you know who I’m talking about.

play01:21

Now, you might be thinking: Don't we have artificial intelligence already? Like, on my phone?

play01:24

Well, yeah. But the kind of AI that we use to send our texts, proof-read our emails,

play01:29

and plot our commutes to work is pretty weak in the technical sense.

play01:32

A machine or system that mimics some aspect of human intelligence is known as Weak AI.

play01:37

Siri is a good example, but similar technology has been around a lot longer than that.

play01:41

Auto-correct, spell-check, even old school calculators are capable of mimicking portions of human intelligence.

play01:46

Weak AI is characterized by its relatively narrow range of thought-like abilities.

play01:51

Strong AI, on the other hand, is a machine or system that actually thinks like us.

play01:56

Whatever it is that our brains do, strong AI is an inorganic system that does the same thing.

play02:01

While weak AI has been around for a long time, and keeps getting stronger,

play02:04

we have yet to design a system with strong AI.

play02:06

But what would it mean for something to have strong AI?

play02:10

Would we even know when it happened?

play02:12

Way back in 1950, British mathematician Alan Turing was thinking about this very question.

play02:16

And he devised a test – called the Turing Test – that he thought would be able to demonstrate when a machine had developed the ability to think like us.

play02:23

Turing’s description of the test was a product of its time – a time in which there were really no computers, to speak of.

play02:28

But if Turing were describing it today, it would probably go something like this:

play02:31

You’re having a conversation, via text, with two individuals.

play02:34

One is a human, and the other is a computer or AI of some kind.

play02:38

And you aren’t told which is which.

play02:40

You may ask both of your interlocutors anything you would like,

play02:43

and they are free to answer however they like – they can even lie.

play02:46

Do you think you’d be able to tell which one was the human?

play02:49

How would you tell? What sort of questions would you ask?

play02:51

And what kind of answers would you expect back?

play02:53

A machine with complex enough programming ought to be able to fool you into believing you’re conversing with another human.

play02:58

And Turing said, if a machine can fool a human into thinking it's a human, then it has strong AI.

play03:04

So in his view, all it means for something to think like us is for it to be able to convince us that it’s thinking like us.

play03:09

If we can’t tell the difference, there really is no difference.

play03:12

It's a strictly behavior-based test.

play03:14

And if you think about it, isn’t behavior really the standard we use to judge each other?

play03:18

I mean, really, I could be a robot!

play03:20

So could these guys who are helping me shoot this episode.

play03:22

The reason I don’t think I’m working with a bunch of androids is that they act the way that I’ve come to expect people to act.

play03:28

At least, most of the time.

play03:29

And when we see someone displaying behaviors that seem a lot like ours

play03:32

– displaying things like intentionality and understanding –

play03:35

we assume that they have intentionality and understanding.

play03:38

Now, fast-forward a few decades, and meet contemporary American philosopher William Lycan.

play03:43

He agrees with Turing on many points, and has the benefit of living in a time when artificial intelligence has advanced like crazy.

play03:49

But Lycan recognizes that a lot of people still think that you can make a person-like robot,

play03:53

but you could never actually make a robot that’s a person.

play03:57

And for those people, Lycan would offer up this guy for consideration: Harry.

play04:00

Harry is a humanoid robot with lifelike skin. He can play golf and the viola.

play04:04

He gets nervous. He makes love. He has a weakness for expensive gin.

play04:08

Harry, like John Green, gives every impression of being a person.

play04:12

He has intentions and emotions. You consider him to be your friend.

play04:16

So if Harry gets a cut, and then motor oil, rather than blood, spills out, you would certainly be surprised.

play04:21

But, Lycan says, this revelation shouldn’t cause you to downgrade Harry’s cognitive state from “person” to “person-like.”

play04:28

If you would argue that Harry’s not a person, then what’s he missing?

play04:31

One possible answer is that he’s not a person because he was programmed.

play04:34

Lycan’s response to that is, well, weren’t we all?

play04:37

Each of us came loaded with a genetic code that predisposed us to all sorts of different things –

play04:42

you might have a short fuse like your mom, or a dry sense of humor like your grandfather.

play04:46

And in addition to the coding you had at birth, you were programmed in all sorts of other ways by your parents and teachers.

play04:51

You were programmed to use a toilet, silverware, to speak English, rather than Portuguese.

play04:56

Unless, of course, you speak Portuguese. But if you do, you were still programmed.

play04:59

And what do you think I’m doing right now? I’m programming you!

play05:03

Sure, you have the ability to go beyond your programming, but so does Harry. That’s Lycan’s point.

play05:08

Now another distinction that you might make between persons like us and Harry is that we have souls and Harry doesn’t.

play05:14

Now, you’ve probably seen enough Crash Course Philosophy by now to know how problematic this argument is.

play05:18

But let’s suppose there is a God, and let’s suppose that he gave each of us a soul.

play05:23

We of course have no idea what the process of “ensoulment” might look like.

play05:26

But suffice it to say, if God can zap a soul into a fertilized egg or a newborn baby,

play05:31

there’s no real reason to suppose he couldn’t zap one into Harry as well.

play05:35

Harry can’t reproduce, but neither can plenty of humans, and we don’t call them non-persons.

play05:39

He doesn’t have blood but, really, do you think that that’s the thing that makes you you?

play05:43

Lycan says Harry’s a person.

play05:45

His origin and his material constitution are different than yours and mine, but who cares?

play05:50

After all, there have been times and places in which having a different color of skin, or different sex organs,

play05:55

has caused someone to be labeled a “non-person,” but we know that kind of thinking doesn't hold up to scrutiny.

play06:01

Back in 1950, Turing knew no machine could pass his test.

play06:04

But he thought it would happen by the year 2000.

play06:06

It turns out, though, that because we can think outside of our programming in ways that computer programs can't,

play06:12

it's been really hard to design a program that can pass the Turing Test.

play06:15

But what will happen when something can?

play06:17

Many argue that, even if a machine does pass the Turing test, that doesn't tell us that it actually has strong AI.

play06:22

These objectors argue that there's more to “thinking like us” than simply being able to fool us.

play06:27

Let’s head over to the Thought Bubble for some Flash Philosophy.

play06:29

Contemporary American philosopher John Searle constructed a famous thought experiment called the “Chinese Room,”

play06:34

designed to show that passing for human isn’t sufficient to qualify for strong AI.

play06:39

Imagine you're a person who speaks no Chinese.

play06:42

You’re locked in a room with boxes filled with Chinese characters,

play06:45

and a code book in English with instructions about what characters to use in response to what input.

play06:50

Native Chinese speakers pass written messages, in Chinese, into the room.

play06:53

Using the code book, you figure out how to respond to the characters you receive,

play06:57

and you pass out the appropriate characters in return.

play07:00

You have no idea what any of it means, but you successfully follow the code.

play07:03

You do this so well, in fact, that the native Chinese speakers believe you know Chinese.

play07:08

You’ve passed the Chinese-speaking Turing Test.

play07:11

But do you know Chinese? Of course not.

play07:13

You just know how to manipulate symbols – with no understanding of what they mean –

play07:17

in a way that fools people into thinking you know something you don't.

play07:20

Likewise, according to Searle, the fact that a machine can fool someone into thinking it’s a person doesn't mean it has strong AI.

play07:26

Searle argues that strong AI would require that the machine have actual understanding,

play07:31

which he thinks is impossible for a computer to ever achieve.

play07:34

Thanks, Thought Bubble! One more point before we get out of here.

play07:36

Some people have responded to the Chinese Room thought experiment by saying, sure, you don’t know Chinese.

play07:41

But, no particular region of your brain knows English, either.

play07:44

The whole system that is your brain knows English.

play07:47

Likewise, the whole system that is the Chinese Room – you, the code book, the symbols –

play07:51

together know Chinese, even though the particular piece of the system that is you, does not.

play07:57

So…I’ve been thinking about it. I’m still not convinced John isn’t a robot.

play08:01

In fact, Harry really drove home the point for me that we don’t actually know what’s going on inside any of us.

play08:06

But if it would turn out that John – the John I’ve known my entire life –

play08:09

has motor oil instead of blood inside, well, he’d still be my brother.

play08:13

Today we learned about artificial intelligence, including weak AI and strong AI,

play08:17

and the various ways that thinkers have tried to define strong AI.

play08:20

We considered the Turing Test, and John Searle’s response to the Turing Test, the Chinese Room.

play08:25

We also talked about William Lycan, Harry, and my brother, the still-possibly-but-probably-not android.

play08:30

Next time, we’ll look into an issue that has been lurking around this discussion of artificial intelligence:

play08:35

do any of us have free will?

play08:37

This episode is brought to you by Squarespace.

play08:40

Squarespace helps to create websites, blogs or online stores for you and your ideas.

play08:44

Websites look professionally designed regardless of skill level, no coding required.

play08:49

Try Squarespace at squarespace.com/crashcourse for a special offer.

play08:53

Squarespace: share your passion with the world.

play08:55

Crash Course Philosophy is produced in association with PBS Digital Studios.

play08:58

You can head over to their channel and check out a playlist of the latest episodes from shows like

play09:02

PBS OffBook, The Art Assignment, and Blank on Blank.

play09:06

This episode of Crash Course was filmed in the Doctor Cheryl C. Kinney Crash Course Studio

play09:10

with the help of all of these awesome people. Our equally fantastic graphics team is Thought Cafe.


Related Tags
AI philosophy · Turing Test · strong AI · robots · personhood · John Searle · Chinese Room · technology ethics · free will · Crash Course