3. Cognitive Architectures

MIT OpenCourseWare
4 Mar 2014 · 110:36

Summary

TLDR: In this video, the professor takes a close look at artificial intelligence and human problem-solving ability. He begins by stressing the remarkable abilities human infants acquire simply by watching what other people do, then criticizes the AI field for leaning too heavily on the success model of physics, namely the search for a small set of rules that explains the phenomena. He argues for higher-level representations and discusses the challenges of modeling human thinking, including emotions and the diversity of problem-solving processes. He also notes that cognitive psychology has largely ignored the principal activities of problem solving and examines the limits of the traditional field. Looking toward the future of AI, he emphasizes its potential for caring for the elderly and augmenting human cognition, while raising ethical and philosophical questions, including whether machines can or should develop a sense of self. He closes this rich, thought-provoking lecture with some humor about the unsolved mysteries of consciousness and how the brain works.

Takeaways

  • 🌟 The professor's main concern is understanding how people are able to solve so many kinds of problems, especially the way human infants learn, over years of growing up, just by watching what other people do.
  • 📚 The great success of science over the last 500 years has been in physics, explaining a huge range of everyday phenomena by finding small sets of rules such as Newton's laws.
  • 🧐 His quarrel with the AI community is that it relies too heavily on the success model of the physical sciences and neglects the importance of psychology and cognitive science.
  • 🤔 Human problem solving involves high-level representations, not just simple stimulus-response patterns.
  • 📈 To get anything like human intelligence you need all sorts of high-level representations, and existing AI theories have not explored enough what those representations might be.
  • 😶 He asks why we have hundreds of words for describing emotions but almost no technical vocabulary for describing intelligence or problem-solving processes.
  • 🤓 He advocates richer internal representations and problem-solving strategies in AI research, rather than purely rule-based systems.
  • 👴 One important future application of AI, in his view, is caring for the elderly: as people live longer, machines will be needed to help look after a growing older population.
  • 🤖 He thinks AI is not much of a threat at the moment, but it could suddenly become a problem, so its development should be treated with care.
  • 📚 He recommends Aaron Sloman's website as a resource for digging deeper into AI and cognitive science.
  • 🎨 He sees little difference between art and science: artists and engineers both solve problems; they just face different kinds of problems.
  • ❓ He stays open on whether AI can develop a genuine "sense of self," which might be fundamentally different from human self-awareness.

Q & A

  • What does the professor mean when he says human infants can solve many kinds of problems by watching others?

    - He is referring to social learning: human infants can imitate and learn other people's behaviors and problem-solving methods, which is a central part of human cognitive development.

  • What is the professor's view of the artificial intelligence community?

    - He thinks it relies too much on the successes of the physical sciences, looking for small sets of rules like Newton's laws of motion to explain intelligent behavior, while neglecting the complexity and diversity of human intelligence.

  • Why does the professor consider the AI theories of the 1960s and '70s important?

    - Because the AI of that period studied internal representations, that is, how the things we think about are represented in the brain or mind, which he considers crucial to understanding human intelligence.

  • The professor says that when one solution doesn't work, we try another way. What is that process?

    - It is a process of trial and error: when we face a problem, the mind tries different representations or solutions; if the current approach doesn't work, we shift our thinking and try another way until we find a path to a solution.

  • What is the professor's view of rule-based systems?

    - He thinks rule-based systems are useful in some situations but not enough to explain higher-level human thinking; he criticizes them as oversimplified and unable to capture the complexity of human problem solving.

  • The professor mentions some of the history of robotics. What is his view?

    - He thinks that despite some progress, many challenges remain. For example, at the Fukushima accident there was no robot that could go into the reactor and operate; he suggests robotics has not advanced as fast as people expected.

  • What does the professor predict for the future of AI?

    - He predicts that AI will eventually be able to solve what people call everyday commonsense problems, and that in many respects it may work differently from the human brain, but that this doesn't matter much, because such systems will still help us understand human intelligence.

  • How does the professor describe AI's potential role as people live longer?

    - He argues that as lifespans increase we will need AI to care for a growing elderly population, because there may not be enough younger people to take care of them.

  • What is the professor's view of the relationship between human intelligence and AI?

    - He holds that we do not need to simulate human intelligence exactly to build AI systems. Even if an AI system differs from human intelligence in various respects, it can still be very effective at solving problems people find hard.

  • What kind of programming language does the professor say we need?

    - A new kind of programming language whose instructions describe goals and subgoals rather than just concrete operational steps.

  • How does the professor see the relation between emotions and intellectual thinking?

    - He points out that we have hundreds of words for emotions but comparatively few for describing intellectual thought processes, and he thinks cognitive psychology should pay more attention to classifying and describing the processes of thinking.

Outlines

00:00

📚 Open educational resources and human problem-solving ability

This segment covers support for sharing educational resources, in particular MIT OpenCourseWare, and opens the theoretical discussion of human problem-solving ability. The professor gives his view of problem solving in animals and humans, criticizes the AI field's over-reliance on the achievements of the physical sciences, and stresses the importance of psychology and cognitive science for understanding human cognitive abilities.

05:02

🤔 Human cognition and the limits of artificial intelligence

The professor digs into the complexity of human cognition, including how we solve problems through observation and learning, and the current limits of AI in modeling human cognition. He also reflects on internal representations and problem-solving strategies, and comments on existing systems such as Watson and Wolfram Alpha.

10:04

📈 Problem-solving strategies and the diversity of emotional expression

In this part, the professor discusses the different strategies people use when solving problems, including the diversity of emotional and mental processes. He criticizes so-called rule-based systems and expresses the hope that cognitive psychology will devote more research to problem-solving processes.

15:09

🤖 Robotics and the future of artificial intelligence

The professor shares some views on the development of robotics, including historical projects and personal experience. He discusses the limitations and future possibilities of robotics, and how AI might help with the challenges we will face, particularly an aging population.

20:09

🧐 Where cognitive psychology and artificial intelligence meet

The professor explores the relationship between cognitive psychology and AI, stressing how important cognitive psychology is for understanding human thought processes. He also touches on how the development of programming languages has shaped AI research, and what he hopes for from future programming languages.

25:09

🌟 The ultimate goals of artificial intelligence

The professor discusses the ultimate goals of AI, including predictions about humanity's future and the role AI will play in it. He offers his views on longer lifespans and changing social structures, and on how AI could help us adapt to those changes.

30:11

🎨 Creative thinking in art and engineering

The professor compares the creative thinking of artists and engineers, stressing the similarities and differences in how they solve problems. He discusses how creative thinking is applied in art and in engineering, and where creative ideas come from.

35:16

🧬 Where biology and artificial intelligence meet

The professor discusses how biology has influenced AI research, including how the study of animal behavior has inspired work in AI. He also mentions some specific biological research, such as the work on tadpoles and a nerve-derived growth hormone.

40:17

🤔 Patterns of thought and reflections on artificial intelligence

The professor explores the layered structure of human thinking and how those patterns shape the way we design and understand AI. He discusses the importance of feedback and self-reference in thinking and offers some insights on building AI systems.

45:17

🧘‍♂️ Mental stillness and the safety of artificial intelligence

The professor discusses what happens in the mind when it is quiet, and how that compares with AI. He also touches on AI safety, including worries about hacking and network security and the catastrophic risks AI might bring.

Keywords

💡Artificial Intelligence

Artificial intelligence (AI) refers to intelligent behavior exhibited by man-made systems. In the video, the professor discusses the development of AI, in particular the challenges and progress in modeling human problem-solving ability. For example, he mentions AI's limits in understanding commonsense problems and its possible future impact on human society.

💡Cognitive Psychology

Cognitive psychology is the branch of psychology that studies human cognitive processes, including perception, memory, thinking, language, and problem solving. The video notes that although cognitive psychology is crucial for understanding human thought processes, there is still no unified theory that fully explains how people think.

💡Neuroscience

Neuroscience studies the structure, function, development, genetics, physiology, and pathology of the nervous system. In the video, the professor points out that neuroscience has made progress in understanding human sensory and motor systems, but a gap remains between it and the high-level theories of artificial intelligence.

💡Problem Solving

Problem solving is the use of cognitive processes to identify a problem, analyze it, and find a solution. In the video, the professor explores how people use different strategies to solve problems, and mentions AI's attempts and challenges in this area.

💡Representation

In cognitive psychology and AI, a representation is the internal encoding or organization of information in a brain or a system. The professor discusses the importance of representations in thinking, and how representing information in different ways can help with problem solving.

💡Evolution

Evolution is the gradual development and differentiation of species over time. The video notes that different animals have evolved abilities to solve particular problems, whereas humans solve a wide variety of problems through observation and learning.

💡Rule-Based Systems

A rule-based system is a computational system built on a set of rules for processing data and making decisions. In the video, the professor criticizes over-reliance on the rule-based approach, arguing that it cannot adequately capture the complexity of human cognition.

💡Self-Model

A self-model is an individual's understanding of its own existence and functioning. In the video, the professor raises the question of whether AI can develop a self-model, and how that might affect its behavior and decisions.

💡Creativity

Creativity is the ability to produce novel and valuable ideas. The professor discusses the role of creativity in art and in science, and the different ways artists and engineers approach problem solving.

💡Sociobiology

Sociobiology studies the evolutionary origins and biological basis of social behavior in animals. The video mentions its role in understanding human behavior, as well as the controversy it has generated in science and society.

💡Pattern Recognition

Pattern recognition is the process of identifying patterns or regularities in data; it is a basic aspect of both AI and human cognition. In the video, the professor uses an example about a sequence of numbers to explore human pattern-recognition ability.

Highlights

MIT OpenCourseWare is committed to offering free, high-quality educational resources; supporters' donations help it keep going.

The professor's focus is on understanding how humans solve so many kinds of problems, especially how infants learn by watching human behavior.

Unlike physics, which explains phenomena with a few laws, human cognition and problem solving may require much richer internal representations.

The great successes of science came from finding sets of rules, such as Newton's laws and Maxwell's equations, but human cognition may not reduce to rules in the same way.

His criticism of the AI community is that it relies too much on the success model of the physical sciences and neglects the complexity of human cognition.

He notes that although there are many ways of solving problems, psychology books rarely discuss the principal activities of commonsense thinking.

Traditional conditioned-reflex theory is not enough to explain human cognition; the stimulus has to be represented by higher-level semantic structures.

He discusses the challenges AI faces in modeling human intellect, including research on internal representations.

He mentions early AI theories and their influence on modern AI, including his assessment of AI theory from the '60s through the '80s.

He comments critically on existing AI systems such as IBM's Watson, doubting how much commonsense reasoning they really understand.

Wolfram Alpha shows how finding several mathematical representations of a question can give better answers than a traditional search engine.

He discusses how people use different problem-solving strategies, including giving up on a hard problem and having the answer arrive later.

He criticizes rule-based systems as oversimplified and unable to capture the complexity of human cognition.

He criticizes the field of cognitive psychology for not paying enough attention to research on problem-solving processes.

He discusses the vocabulary we have for emotions and feelings, and the difficulty of finding words to describe intellectual activities.

He recommends Aaron Sloman's website as a resource for digging deeper into AI and cognitive science.

He makes predictions about the future of AI, including views on elder care and changing social structures.

He argues that AI need not copy human intelligence exactly in order to solve real problems.

He criticizes the current state of programming languages and proposes the need for a new, goal-oriented programming language.

Transcripts

play00:00

The following content is provided under a Creative

play00:02

Commons license.

play00:03

Your support will help MIT OpenCourseWare

play00:06

continue to offer high quality educational resources for free.

play00:10

To make a donation or to view additional materials

play00:12

from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

play00:26

PROFESSOR: So really, what my main concern

play00:33

has been for quite a few years is

play00:39

to make some theory of what makes people able to solve

play00:49

so many kinds of problems.

play00:51

I guess, if you ran through the spectrum of all the animals,

play00:59

you'd find lots of problems that some animals can solve

play01:03

and people can't, like how many of you could build a beaver dam

play01:09

and/or termite nest.

play01:14

So there are all sorts of things that evolution

play01:17

manages to produce.

play01:20

But maybe the most impressive one

play01:23

is what the human infant can do just

play01:29

by hanging around for 10, or 20, or 30

play01:32

years and watching what other humans can do.

play01:38

So we can solve all sorts of problems,

play01:41

and my quarrel with most of the artificial intelligence

play01:50

community has been that the great success of science

play01:57

in the last 500 years really has been in physics.

play02:02

And it's been rewarded by finding little sets of rules,

play02:09

like Newton's three laws, and Maxwell's four

play02:12

laws, and Einstein's one law or two,

play02:22

that explained a huge range of everyday phenomena.

play02:28

Of course, in the 1920s and '30s,

play02:31

that apple cart got upset.

play02:35

Actually, Einstein himself, who had discovered

play02:40

the first quantum phenomena, namely

play02:43

the quantization of photons, had produced

play02:52

various scientific laboratory observations

play02:57

that were inexplicable in terms of either Maxwell,

play03:01

or Newton, or Einstein's earlier formulations.

play03:07

So my picture of the history is that in the 19th century

play03:15

and a little bit earlier going back to Locke, and Spinoza,

play03:19

and Hume, and a few of those philosophers, even

play03:23

Immanuel Kant, they had some pretty good

play03:26

psychological ideas.

play03:28

And as I mentioned the other day,

play03:31

I suspect that Aristotle was more

play03:35

like a modern cognitive psychologist

play03:38

and had even better ideas.

play03:40

But we've probably lost a lot of them,

play03:42

because there are no tape recorders.

play03:45

Who knows what Aristotle and Plato said that their students

play03:50

didn't write down?

play03:52

Because it sounded silly.

play04:02

The idea that we developed around here, mostly,

play04:09

Seymour Papert, and a lot of students--

play04:12

Pat Winston was one of the great stars of that period.

play04:18

--was the idea that to get anything

play04:22

like human intellectual abilities,

play04:26

you're going to have to have all sorts of high level

play04:29

representations.

play04:32

So one has to say, the old conditioned reflex of stimulus

play04:38

producing a response isn't good enough.

play04:42

The stimulus has to be represented

play04:44

by some kind of semantic structure

play04:47

somewhere in the brain or mind.

play04:52

So far as I know, it's only in the theories

play04:57

of not even modern artificial intelligence,

play05:02

but the AI of the '60s, and '70s, and '80s,

play05:06

that people thought about what could

play05:11

be the internal representation of the kinds of things

play05:15

that we think about.

play05:18

And even more important, if one of those representations,

play05:24

you see something, or you remember some incident.

play05:27

And your brain represents it in some way.

play05:30

And if that way doesn't work, you take a breath.

play05:33

And you sort of stumble around and find another way

play05:37

to represent it.

play05:39

Maybe when the original event first happened,

play05:43

you represented it in three or four ways.

play05:46

So we're beginning to see--

play05:50

did anybody hear Ferucci's talk?

play05:54

The Watson guy was up here a couple of days ago.

play05:59

I missed it, but they haven't made a technical publication

play06:05

as far as I know of how this Watson program works.

play06:09

But it sounds like it's something

play06:11

of a interesting society of mind like structure,

play06:15

and it'd be nice if they would--

play06:17

has anybody read any long paper on it?

play06:21

There have been a lot of press reports.

play06:23

Have you seen anything, Pat?

play06:28

Anyway, they seem to have done some sorts

play06:32

of commonsense reasoning.

play06:33

As I said the other day, I doubt that Watson could understand

play06:38

why you can pull something with a string, but you can't push.

play06:45

Actually, I don't know if any existing program

play06:48

can understand that yet.

play06:52

I saw some amazing demonstrations Monday

play07:00

by Steve Wolfram of his Wolfram Alpha, which doesn't

play07:10

do much common sense reasoning.

play07:12

But what it does do is, if you put in a sentence,

play07:17

it finds five or 10 different representations,

play07:21

anything you can find that's sort of mathematical.

play07:25

So when you ask a question, it gives you 10 answers,

play07:28

and it's much better than previous systems.

play07:31

Because it doesn't-- well, Google gives you a quarter

play07:38

million answers.

play07:39

But that's too many.

play07:43

Anyway, I'm just going to talk a little bit more,

play07:50

and everybody should be trying to think of a question

play07:54

that the rest of the class might answer.

play07:59

So there are lots of different kinds of problems that people

play08:02

can solve going back to the first one,

play08:05

like which moving object out there is my mother

play08:09

and which might be a potential threat.

play08:14

So there are a lot of kinds of problems that we solve,

play08:18

and I've never seen any discussion

play08:21

in psychology books of what are the principal activities

play08:30

of common sense thinking.

play08:32

Somehow, they don't have--

play08:39

or people don't-- before computers,

play08:42

there really wasn't any way to think about high level

play08:45

thinking.

play08:46

Because there weren't any technically usable ways

play08:52

to describe complicated processes.

play08:55

The idea of a conditional expression

play09:00

was barely on the threshold of psychology,

play09:06

so what kinds of problems do we have?

play09:09

And if you take some particular problem,

play09:11

like I find these days, I can't get the top off bottles.

play09:19

So how do I solve that?

play09:21

And there are lots of answers.

play09:26

One is you look for somebody who looks really strong.

play09:30

Or you reach into your pocket, and you probably

play09:37

have one of these and so on.

play09:44

There must be some way to put it on the floor, and step on it,

play09:47

and kick it with the other foot.

play09:53

So there are lots of problems that we're facing every day.

play09:57

And if you look in traditional cognitive psychology--

play10:03

well, what's the worst theory?

play10:05

The worst and the best theory got popular in the 1980s,

play10:10

and it was called rule based systems.

play10:14

And you just have a big library, which says,

play10:18

if you have a soda bottle and you can't get the cap off,

play10:21

then do this, or that, or the other.
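
(A minimal sketch of that "big library of rules" idea, with invented situations and actions; it is only meant to make the if-situation-then-action structure concrete, not to reproduce anything from the lecture.)

```python
# Hypothetical rule library: each rule pairs a condition test with an action,
# and the first rule whose condition matches the situation wins.
rules = [
    (lambda s: s["object"] == "soda bottle" and s["cap_stuck"] and s["have_opener"],
     "use the opener in your pocket"),
    (lambda s: s["object"] == "soda bottle" and s["cap_stuck"],
     "ask someone who looks strong to open it"),
    (lambda s: True,                      # fallback rule
     "put it on the floor and step on it"),
]

def react(situation):
    """Return the action of the first matching rule."""
    for condition, action in rules:
        if condition(situation):
            return action

print(react({"object": "soda bottle", "cap_stuck": True, "have_opener": False}))
```

The criticism that follows in the lecture is that a flat library like this, however large, says nothing about how the situation itself gets represented.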

play10:24

So some people decided, well, that's really all you need.

play10:31

Rod Brooks in the 1980s sort of said,

play10:36

we don't need those fancy theories

play10:38

that people, like Minsky, and Papert,

play10:40

and Winston are working on.

play10:42

Why not just say for each situation in the outer world

play10:47

have a rules that says how to deal with that situation?

play10:51

Let's make a hierarchy of them, and he

play10:54

described a system that sort of looked like the priority

play10:57

interrupt system in a computer.

play11:03

And he won all sorts of prizes for this really bad idea

play11:07

that spread around the world, but it

play11:10

solved a lot of problems.

play11:12

There are things about priority interrupt

play11:15

that aren't obvious, like suppose you have--

play11:20

in the first computers, there was some problem.

play11:22

Because what should you do, if there's

play11:25

several signals coming into the computer,

play11:27

and you want to respond to them?

play11:30

And some of the signals are very fast and very short.

play11:36

Then you might think, well, I should give the highest

play11:40

priority to the signal that's going to be there

play11:45

the shortest time or something like that.

play11:48

The funny part is that when you made such a system,

play11:52

the result was that, if you had a computer that

play11:56

was responding to some signal that's coming in at a--

play12:01

I'm talking about the days when computers were only

play12:03

working at a few kilohertz, few thousand operations a second.

play12:08

God, that's slow, a million times shorter

play12:11

than what you have in your pocket.

play12:15

And if you give priority to the signals that

play12:20

have to be reacted to very fast, then

play12:23

what happens if you type to those computers?

play12:25

It would never see them, because it's always--

play12:28

I saw this happening once.

play12:31

And finally, somebody realized that you

play12:34

should give the highest priority to the inputs that

play12:39

come in least frequently, because there's always--

play12:44

otherwise, if there's something coming in very frequently,

play12:47

you'll just always be responding to it.

play12:50

Any of you run into this?

play12:55

It took me a while to figure out why.
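
(A rough simulation of that scheduling point, with invented device names and rates: if the most frequent, fastest signal always wins, the rare keyboard input is starved; giving the least frequent source the highest priority fixes it. This is only an illustration, not how any particular machine worked.)

```python
import heapq
from collections import Counter

# Hypothetical event stream: a high-rate device floods the queue,
# while a single keystroke arrives in the middle.
events = ["disk"] * 20 + ["keyboard"] + ["disk"] * 20

def keyboard_position(events, rarest_first=True):
    counts = Counter(events)                 # how often each source occurs
    heap = []
    for i, src in enumerate(events):
        # With rarest_first, the least frequent source gets the best (smallest)
        # priority; otherwise the most frequent source always wins.
        prio = counts[src] if rarest_first else -counts[src]
        heapq.heappush(heap, (prio, i, src))
    served = [heapq.heappop(heap)[2] for _ in range(len(heap))]
    return served.index("keyboard")

print("rarest-first: keyboard served at position", keyboard_position(events, True))
print("frequent-first: keyboard served at position", keyboard_position(events, False))
```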

play13:00

Anyway, there are lots of kinds of problems.

play13:04

And the other day, I was complaining

play13:10

that we didn't have enough ways to do this.

play13:15

We had hundreds of words for emotions,

play13:18

and here's a couple of dozen.

play13:23

They're in chapter seven and eight actually most of these.

play13:28

So here's a bunch of words for describing ways to think,

play13:33

but they're not very technical.

play13:35

So you can talk about remorse, and sorrow,

play13:38

and blah, blah, blah.

play13:41

Hundreds and hundreds of words for feelings, and it's

play13:46

a lot of effort to find a dozen words for intellectual, for--

play13:54

what should I call them? --problem solving processes.

play13:58

So it's curious to me that the great field

play14:01

called cognitive psychology has not focused in that direction.

play14:07

Anyway, here's about 20 or 30 of them.

play14:09

And you'll find them scattered through chapters seven

play14:13

and eight.

play14:17

Here's my favorite one, and I don't

play14:19

know of any proper name for it.

play14:21

But if you're trying to solve a problem, and you're stuck,

play14:26

and the example that comes to my mind

play14:29

is, if I'm trying to remember someone's name,

play14:33

I can tell when it's hopeless.

play14:35

And the reason is that for somehow or other,

play14:44

I know that there's a huge tree of choices.

play14:47

That's one way to represent what's going on,

play14:51

and I might know that--

play14:54

I'm sure that name has a Z in it.

play14:59

So you search around and try everything you can.

play15:02

But of course, it doesn't have a Z,

play15:08

so the way to solve that problem is to give up.

play15:14

And then a couple of minutes later, the name occurs to you.

play15:20

And you have no idea how it happened and so forth.

play15:31

Anyway, the long story is that Papert, and I,

play15:36

and lots of really great students in the '60s and '70s

play15:46

spent a lot of time making little models of problem

play15:49

solvers that didn't work.

play15:51

And we discovered that you needed something else,

play15:55

and we had put that in.

play15:58

Other people would come and say, that's hopeless.

play16:03

You're putting in more things than you need.

play16:06

And my conclusion is that, wow, it's the opposite of physics.

play16:12

In physics, you're always trying to find--

play16:15

what is it called?

play16:16

--Occam's razor.

play16:18

Never have more structure than you need, because what?

play16:24

Well, it'll waste your time, but my feeling was, never

play16:30

have less than you'll need.

play16:32

But you don't know how many you'll need.

play16:34

So what I did, I had four of these,

play16:37

and then I forced myself to put in two more.

play16:40

And people ask, what's the difference between self models

play16:43

and self-conscious processes?

play16:45

And I don't care.

play16:48

Well, what's the difference between

play16:49

self-conscious and reflective?

play16:51

I don't care.

play16:53

And the reason is that, wow, it's

play16:56

nice to have a box that isn't full yet.

play16:59

So if you find something that your previous theory--

play17:05

going back to Brooks, he was so successful

play17:10

getting simple robots to work that he concluded

play17:14

that the things didn't need any internal representations

play17:18

at all.

play17:19

And for some mysterious reason, the Artificial Intelligence

play17:23

Society gave him their annual big prize for this very wrong

play17:28

idea, and it caused AI research to sort of half collapse

play17:33

in places, like Japan.

play17:35

He said, oh, rule based systems is all we need.

play17:40

Anybody want to defend him?

play17:44

The odd thing is, if you talk to Brooks,

play17:46

he's one of the best philosophers you'll ever meet.

play17:49

And he says, oh yes, of course, that's wrong,

play17:53

but it helps people do research and get things done.

play17:57

And as, I think, I mentioned the other day

play18:02

when the Three Mile Island thing happened,

play18:09

there was no way to get into the reactor.

play18:12

That was 1980.

play18:14

And 30 years later when the--

play18:19

how do you pronounce it?

play18:21

--Fukushima accident happened, there

play18:26

was no robot that could go in and open a door.

play18:34

I don't know who to blame for that.

play18:37

Maybe us.

play18:41

But my picture of the history is that the places

play18:44

that did research on robotics, there were quite a few places.

play18:50

And for example, Carnegie Mellon was

play18:53

very impressive in getting the Sony dogs to play soccer,

play18:59

and they're still at it.

play19:00

And I think I mentioned that Sony still has a stock of--

play19:06

what's it called?

play19:07

AUDIENCE: AIBOs.

play19:10

PROFESSOR: Say it again.

play19:11

AUDIENCE: AIBOs.

play19:12

PROFESSOR: FIBO?

play19:13

AUDIENCE: AIBO, A-I-B-O.

play19:15

PROFESSOR: All right, AIBOs, but the trouble

play19:21

is they're always broken.

play19:27

There was a robot here called Cog that Brooks made,

play19:30

and it sometimes worked.

play19:32

But usually, it wasn't working, so only one student

play19:35

at that time could experiment with the robot.

play19:39

What was that wonderful project of trying to make a walking

play19:42

machine for four years in--

play19:47

there was a project to make a robot walk.

play19:51

And there was only one of it, so first, only one student

play19:55

at a time can do research on it.

play19:58

And most of the time, something's broken,

play20:01

and you're fixing it.

play20:02

So you end up that you sort of get five or 10 hours a week

play20:09

on your laboratory physical robot.

play20:13

At the same time, Ed Fredkin had

play20:16

a student who tried to make a walking robot,

play20:19

and it was a stick figure on the screen.

play20:23

I forgot the student's name.

play20:27

But anyway, he simulated gravity and a few other things.

play20:33

And in a couple of weeks, he had a pretty good robot

play20:36

that could walk, and go around turns, and bank.

play20:40

And if you simulated an oily floor,

play20:44

it could slip and fall, which we considered the high point

play20:49

of the demo actually.

play20:55

So there we find--

play21:06

anyway, I've sort of asked you to read my two

play21:10

books for this course.

play21:14

But those are not the only good texts

play21:18

about artificial intelligence.

play21:21

And if you want to dig deeper, it might be a good idea to go

play21:29

to the web and type in Aaron Sloman, S-L-O-M-A-N.

play21:38

And you'll get to his website, which is something like that.

play21:44

And Sloman is a sort of philosopher who can program.

play21:52

There are a handful of them in the world,

play21:55

and he has lots of interesting ideas

play22:00

that nobody's gotten to carry out.

play22:07

So I recommend.

play22:11

Who else is--

play22:13

Pat, do you ever recommend anyone else?

play22:16

PAT: No.

play22:20

PROFESSOR: What?

play22:24

I'm trying to think.

play22:34

I mean, if you're looking for philosophers,

play22:38

Dan Dennett has a lot of ideas.

play22:39

But Sloman is the only person, I'd say,

play22:44

is a sort of real professional philosopher, who

play22:49

tries to program, at least, some of his ideas.

play22:53

And he has successful students, who

play22:57

have made larger systems work.

play22:59

So if you get tired of me, and you ought to,

play23:04

then go look at this guy, and see who he recommends.

play23:11

OK, who has a good question to ask?

play23:15

AUDIENCE: So Marty, I'm talking about how we have

play23:18

a lot of words for emotions.

play23:20

Why can we only have one word for cause?

play23:23

PROFESSOR: It's a mystery, but I spent

play23:29

most of the couple of days making this list bigger.

play23:38

But these aren't-- you know, these are things that you do

play23:43

when you're thinking.

play23:43

You make analogies.

play23:48

If you have multiple goals, you try

play23:52

to pick the most important one.

play23:53

Or in some cases, if you have several goals,

play23:58

maybe you should try to achieve the easiest one,

play24:01

and there's a chance that it will lead you into what

play24:04

to do about the harder ones.

play24:06

But a lot of people think mostly in England

play24:16

that logic is a good way to do reasoning,

play24:20

and that's completely wrong.

play24:23

Because in logic, first of all, you can't do analogies at all,

play24:29

except at a very high level.

play24:30

It takes four or five nested quantifiers to say,

play24:34

A is to B as C is to which of the following five.

play24:41

So I've never seen anyone do analogical thinking using

play24:47

formal logic, first order or higher order predicate

play24:50

calculus.
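
(One way to see the quantifier problem he is pointing at, written out as a rough schema rather than anything from the lecture: the analogy question "A is to B as C is to which of D_1, ..., D_5?" already forces you to quantify over the relation itself.)

```latex
% Find some relevant relation R that holds between A and B,
% then pick the candidate D_i for which the same R relates C and D_i.
\exists R\,\bigl(\mathrm{Relevant}(R)\;\wedge\;R(A,B)\;\wedge\;
                 \exists i\,(1 \le i \le 5 \;\wedge\; R(C,D_i))\bigr)
```

Everything hard, namely which relations count as relevant and how to search for them, is left outside the formula.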

play24:54

What's logic good for?

play24:56

Its great after you've solved a problem.

play24:59

Because then you can formalize what you did

play25:02

and see if some of the things you did weren't unnecessary.

play25:07

In other words, after you've got the solution

play25:09

to a problem, what you've got by going through a big search,

play25:14

you finally found a path from A to Z.

play25:17

And now, you can see if the assumptions that you

play25:22

had to make to bridge all these various little gaps

play25:26

were all essential or not.

play25:30

Yes?

play25:31

AUDIENCE: What kind of examples would you

play25:34

say that logic came to analogies?

play25:36

Like, well, water is [INAUDIBLE] containment, like why

play25:42

[INAUDIBLE]?

play25:46

PROFESSOR: Well, because you have

play25:49

to make a list of hypotheses, and then

play25:53

let me see if I can find Evans.

play25:56

The trouble is-- darn, Evans name is in a picture.

play26:03

And Word can't look inside its pictures.

play26:07

Can PowerPoint find words in its illustrations?

play26:14

Why don't I use PowerPoint?

play26:17

Because I've discovered that PowerPoint can't read pictures

play26:22

made by other programs in the Microsoft Word suite.

play26:29

The drawing program in Word is pretty good,

play26:33

and then there's an operation in Word,

play26:35

which will make a PowerPoint out of what you drew.

play26:40

And it's 25 years since Microsoft

play26:48

hasn't fixed the fatal errors that it makes when you do that.

play26:54

In other words, I don't think that the PowerPoint and Word

play26:57

people communicate.

play26:59

And they both make a lot of money,

play27:01

so that might be that might be the reason.

play27:08

Where was I?

play27:10

AUDIENCE: Why logic can't do [INAUDIBLE]..

play27:13

PROFESSOR: Well, you can do anything in logic,

play27:15

if you try hard enough, but A is to B

play27:20

as C is to X is a four part relation.

play27:24

And you'd need a whole pile of quantifiers,

play27:27

and how would you know what to do next?

play27:33

Yes?

play27:34

AUDIENCE: Talk a bit about the situation in which we are able

play27:38

to perform some sort of action, like really fluently and really

play27:42

well, but we cannot describe what we're doing.

play27:45

And the example I give is, say, I'm an expert African drummer

play27:50

from Africa, and I can make these really complicated

play27:53

rhythms.

play27:53

But if you asked me, what did you just do?

play27:55

I had no idea how to describe it.

play27:58

And in that case, do you think the person is capable of--

play28:03

I guess, do you think the person--

play28:05

we can say that the person understands this,

play28:07

even though they cannot explain it.

play28:10

PROFESSOR: Well, if you take an extreme form of that,

play28:17

you can't explain why you used any particular word

play28:21

for anything.

play28:22

There's no reason.

play28:26

It's remarkable how well people can do in everyday life

play28:30

to tell people how they got an idea.

play28:33

But when you look at it, it doesn't

play28:36

say how you would program a machine to do it.

play28:39

So there's something very peculiar about the idea that--

play28:46

it goes back to this idea that people have free will

play28:52

and so forth.

play28:53

Suppose, I say, look at this and say,

play29:00

this has a constriction at this point.

play29:03

Why did I say constriction?

play29:06

How do you get any--

play29:07

how do you decide what word to use for something?

play29:10

You have no idea, so it's a very general question.

play29:17

It's not clear that the different parts

play29:21

of the frontal lobes, which might have something

play29:26

to do with making plans and analyzing

play29:29

certain kinds of situations, have any access to what

play29:32

happens in the Broca or--

play29:37

what's the speech production area?

play29:42

Broca, and I'm trying to find the name of the other one.

play29:48

It's connected by a cable that's about a quarter inch thick.

play29:52

AUDIENCE: Is that the Wernicke?

play29:53

PROFESSOR: Wernicke, yeah.

play29:56

We have no idea how those work as far as I've never

play30:01

seen any publication in neuroscience that says,

play30:08

here's a theory of what happens in Wernicke's area.

play30:11

Have any of you ever seen one?

play30:14

What do those people think about it,

play30:17

what they'll tell you about?

play30:21

I was reading something, which said,

play30:22

it's going to be very hard to understand these areas.

play30:25

Because each neuron is connected to 100,000 little fibers.

play30:30

Well, some of them are.

play30:32

And I bet they don't do much, except sort of set

play30:35

the bias for some large collection of other neurons.

play30:45

But if you ask somebody, how did you think of such a word?

play30:49

They will tell you some story or anecdote.

play30:52

But they won't be able to describe

play30:53

some sort of procedure, which is, say,

play30:57

in terms of a language, like lisp.

play30:59

And say, I did this and that, and I took the cdr

play31:03

of this and the car of that.

play31:04

And I put them in this register, and then I swapped that with--

play31:11

You don't see theories of how the mind works

play31:14

in psychology today.

play31:17

The only parts are they know a little bit

play31:20

about some aspects of vision, because you

play31:22

can track the paths of images from the retina

play31:25

to what's called the primary visual cortex.

play31:31

And people have been able to figure out

play31:33

what some of those cortical columns do.

play31:36

And if you go back to an animal, like the frog,

play31:40

then researchers, like [? Bitsey ?] and others,

play31:44

have figured out how the equivalent

play31:46

of the cerebellum in the frog.

play31:50

They've got almost the whole circuit

play31:52

of how when the frog sees a fly, it

play31:55

manages to turn its head that way,

play31:58

and stick its tongue out, and catch it.

play32:00

But in the case of a human, I've never

play32:02

seen any theory of how any person thinks of anything.

play32:09

There's artificial intelligence, which

play32:11

has high level theories of semantic representations.

play32:15

And there's neuroscience, which has

play32:18

good theories of some parts of locomotion

play32:21

and some parts of sensory systems.

play32:24

And to this day, there's nothing much in between.

play32:33

David, here, has decided to go from one to the other,

play32:37

and a former student of mine Bob Hearn

play32:41

has done a little bit on both.

play32:42

And I bet there are 20 or 30 people around the country, who

play32:46

are trying to bridge the gap between symbolic artificial

play32:50

intelligence and mappings of the nervous system.

play32:55

But it's very rare, and I don't know

play33:00

who you could ask to get support to work on a problem like that

play33:04

for five years.

play33:05

Yeah?

play33:06

AUDIENCE: So presumably to build a human-level

play33:10

artificial intelligence, we need

play33:11

to perfectly model our own intelligence, which

play33:15

means that we are the system.

play33:18

We ourself are the system that we're trying the understand.

play33:21

PROFESSOR: Well, it doesn't have to be exactly.

play33:23

I mean, people are different, and the typical person

play33:30

looks like they have 400 different brain

play33:34

centers doing slightly different things or very

play33:36

different things.

play33:38

And we have these examples.

play33:41

In many cases, if you lose a lot of your brain,

play33:46

you're very badly damaged.

play33:47

And in other cases, you recover and become just about as smart

play33:55

as you were.

play33:56

There's probably a few cases, where you got rid

play33:58

of something that was holding you back,

play34:00

but it's hard to prove that.

play34:06

We don't need a theory of how people work yet,

play34:10

and the nice thing about AI is that we could eventually

play34:17

get models, which are pretty good at solving

play34:20

what people call everyday common sense problems.

play34:24

And probably in many respects, they're not

play34:27

the way the human mind works, but it doesn't matter.

play34:31

But once you've got--

play34:33

if I had a program, which was pretty good at understanding

play34:36

why you can pull with a string but not push,

play34:42

then there's a fair chance you could say, well,

play34:45

that seems to resemble what people do.

play34:47

I'll do this few psychological experiments

play34:50

and see what's wrong with that theory and how to change it.

play34:55

So at some point, there'll be people making AI systems,

play35:00

comparing them to particular people,

play35:04

and trying to make them fit.

play35:06

The trouble is nowadays, it takes a few months,

play35:09

if you get a really good new idea, to program it.

play35:15

I think there's something wrong with programming languages,

play35:18

and what we need is a--

play35:21

we need a programming language, where the instructions describe

play35:27

goals and then subgoals.

play35:30

And then finally, you might say, well,

play35:32

let's represent this concept by a number or a semantic network

play35:37

of some sort.

play35:40

Yes?

play35:41

AUDIENCE: That idea of having a programming language where

play35:43

you define goals.

play35:44

PROFESSOR: Is there a goal oriented language?

play35:46

AUDIENCE: So there is kind of one.

play35:48

If you think about it, if you squint hard enough

play35:50

at something, like SQL, where you tell it here,

play35:55

I want to find the top 10 people in my database

play36:00

with this high value.

play36:02

And then you don't worry about how the system goes

play36:04

about doing that.

play36:05

In a sense, that's redefining your goal [INAUDIBLE]..

play36:08

But you got to switch a little bit.

play36:11

PROFESSOR: What's it called?

play36:13

AUDIENCE: SQL.

play36:14

PROFESSOR: SQL.

play36:14

AUDIENCE: [INAUDIBLE] database and curates it [INAUDIBLE]..

play36:19

PROFESSOR: Oh, right.

play36:20

Yes, I guess database query languages are on the track,

play36:25

but Wolfram Alpha seems to be better than I thought.
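
(A tiny sketch of the SQL point the student raises, with an invented table and columns: the query states the goal, "the top 10 by this value," and says nothing about how the engine should go about finding them.)

```python
import sqlite3

# Invented example data; only the shape of the query matters here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, score REAL)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [(f"person{i}", i * 1.5) for i in range(100)])

# Declarative: describe the result you want, not the procedure for getting it.
top_ten = conn.execute(
    "SELECT name, score FROM people ORDER BY score DESC LIMIT 10"
).fetchall()
print(top_ten)
```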

play36:31

Well, he was running it, and Steve Wolfram

play36:39

was giving this demo at a meeting we were at on Monday.

play36:44

And he'd say, well, maybe I'll just say this,

play36:48

and it always worked.

play36:51

So maybe either the language is better than I thought,

play36:54

or Wolfram is better than I thought or something.

play37:01

Remarkable guy.

play37:06

Yes?

play37:07

AUDIENCE: So I liked this example of you

play37:11

only remember a name after you've given up consciously

play37:15

trying to think about it.

play37:16

Do you think this is a matter of us being

play37:18

able to set up background processes,

play37:20

and then there's either some delay.

play37:24

Like we give off- there's some delay

play37:26

in the process, where we don't have the ability to correctly

play37:28

terminate processes.

play37:30

Do you think this only works for memory,

play37:32

or could it work for other things?

play37:34

Like could I start an arithmetic operation,

play37:37

and then give up, and then it'll come to me later?

play37:40

PROFESSOR: Well, there's a lot of nice questions about things

play37:45

like that.

play37:46

How many processes can you run at once in your brain?

play37:50

And I was having a sort of argument the other day

play37:55

about music, and I was wondering if--

play38:06

I see a big difference between Bach

play38:08

and the composers who do counterpoint.

play38:15

Counterpoint, you usually have several versions

play38:19

of a very similar idea.

play38:21

Maybe there's one theme, and you have it playing.

play38:25

And then another voice comes in.

play38:27

And it has that theme upside down,

play38:29

or a variation of it, or in some cases, exactly the same.

play38:33

And then it's called a canon.

play38:37

So the tour de force in classical music

play38:41

is when you have two, or three, or four versions

play38:45

of the same thought going on at once at different times.

play38:50

And my feeling was that in popular music,

play38:53

or if you take a typical band, then there

play39:00

might be four people.

play39:02

And they're doing different things at the same time.

play39:05

Usually, not the same musical tunes.

play39:09

But there's a rhythm, and there's a tympani.

play39:14

And there's various instruments doing different things,

play39:17

but you don't have several doing the same thing.

play39:19

I might be wrong, and somebody said, well,

play39:24

some popular music has a lot of counterpoint.

play39:28

I'm just not familiar with it.

play39:30

But I think that's--

play39:32

if you're trying to solve a hard problem,

play39:35

it's fairly easy to look at the problem

play39:37

in several different ways.

play39:39

But what's hard is to look at it in several

play39:42

almost the same ways that are slightly different.

play39:45

Because probably, if you believe that the brain is

play39:49

made of agents, or resources, or whatever,

play39:53

you probably don't have duplicate copies of ones

play39:56

that do important things.

play39:58

Because that would take up too much real estate.

play40:02

Anyway, I might be completely wrong about jazz.

play40:06

Somebody, maybe they have just as

play40:12

complicated overlapping things as Bach

play40:16

and the contrapuntal composers did.

play40:24

Yeah?

play40:25

AUDIENCE: What is the ultimate goal

play40:26

of artificial intelligence?

play40:28

Is it some sort of application, or is it more philosophical?

play40:32

PROFESSOR: Oh, everyone has different goals or ones.

play40:36

AUDIENCE: In your opinion.

play40:39

PROFESSOR: I think we're going to need it,

play40:42

because the disaster that we're working our way toward

play40:47

is that people are going to live longer.

play40:50

And they'll become slightly less able,

play40:55

so we'll have billions of 200-year-old people

play40:59

who can barely get around.

play41:01

And there won't be enough people to import

play41:06

from underdeveloped countries to,

play41:10

or they won't be able to afford them.

play41:12

So we're going to have to have machines that take care of us.

play41:16

Of course, that's just a transient.

play41:18

Because at some point, then you'll

play41:19

download your brain into a machine and fix

play41:22

everything that's wrong.

play41:24

So we'll need robots for a few years or a few decades.

play41:28

And then we'll be them, and we won't need them anymore.

play41:34

But it's an important problem.

play41:36

What's going to happen in the next 100 years?

play41:40

You're going to have 20 billion 200-year-olds and nobody

play41:45

to take care of them, unless we get AI.

play41:57

Nobody seems particularly sad about that.

play42:03

How long-- oh, another anecdote.

play42:08

I was once giving a lecture and talking about people

play42:12

living a long time.

play42:15

And nobody in the audience seemed interested,

play42:18

and I'd say, well, suppose you could live 400 years.

play42:21

And most of the people--

play42:23

then I asked, what was the trouble?

play42:25

They said, wouldn't it be boring?

play42:28

So then I tried it, again, in a couple of other lectures.

play42:32

And if you ask a bunch of scientists, how would

play42:37

you like to live 400 years?

play42:40

Everyone says, yay, and you ask them why.

play42:45

And they say, well, I'm working on a problem

play42:47

that I might not have time to solve.

play42:49

But if I had 400 years, I bet I could get somewhere on it,

play42:55

and the other people don't have any goal.

play42:58

That's my cold blooded view of the typical non-scientist.

play43:06

There's nothing for them to do in the long run.

play43:11

Who can think of what should people do?

play43:13

What's your goal?

play43:16

How many of you want to live 400 years?

play43:20

Wow, there must be scientists here.

play43:26

Try it on some crowd and let me know what happens.

play43:30

Are people really afraid.

play43:31

Yeah?

play43:32

AUDIENCE: I think the differentiating factor is

play43:34

whether or not your 400 years is just

play43:37

going to be the repetition of 100 years experience,

play43:40

or if it'll start to like take off,

play43:42

then you'll start to learn better.

play43:45

You'll progress.

play43:46

PROFESSOR: Right.

play43:48

I've seen 30 issues of the Big Bang,

play43:54

and I don't look forward to the next one anymore.

play43:57

Because they're getting to be all the same.

play44:00

Although, it's the only thing on TV that has scientists.

play44:10

Seriously, I hardly read anything,

play44:13

except journals and science fiction.

play44:18

Yeah?

play44:19

AUDIENCE: What's the motivation to have robots

play44:22

take care of as we age as opposed to enhancing

play44:26

our own cognitive abilities, or our prosthetic body,

play44:30

or something more societiable?

play44:37

What's the joy of living, if you can't do anything,

play44:39

and somebody takes care of you?

play44:40

PROFESSOR: I can't think of any advantage,

play44:42

except that medicine isn't getting--

play44:47

you know, the age of unhandicapped people

play44:51

went up at one year every four since the late 1940s.

play44:57

So the lifespan is--

play44:59

so that's 60 years.

play45:00

So people are living 15 years longer on the average

play45:04

than they did when I was born or even more than that.

play45:10

But it's leveled off lately.

play45:13

Now I suspected you only have to fix a dozen genes,

play45:17

or who knows?

play45:18

Nobody really has a good estimate,

play45:22

but you can probably double the lifespan, if you could fix.

play45:27

Nobody knows, but maybe there's just a dozen processes

play45:30

that would fix a lot of things.

play45:33

And then you could live longer without deteriorating,

play45:36

and lots of people might get bored.

play45:40

But they'll self select.

play45:46

I don't know.

play45:48

What's your answer?

play45:57

AUDIENCE: I feel that AI is more--

play46:02

the goal is not to help take care of people,

play46:05

but to complement what we already have to entertain us.

play46:10

PROFESSOR: You could also look at them as our descendants.

play46:13

And we will have them replace us and just as a lot of people

play46:23

consider their children to be the next generation of them.

play46:29

And I know a lot of people who don't, so it's not a universal.

play46:42

What's the point of anything?

play46:43

I don't want to get in--

play46:49

we might be the only intelligent life in the universe.

play46:52

And in that case, it's very important

play46:57

that we solve all our problems and make sure

play47:00

that something intelligent persists.

play47:03

I think Carl Sagan had some argument of that sort.

play47:09

If you were sure that there were lots of others,

play47:13

then it wouldn't seem so important.

play47:24

Who is the new Carl Sagan?

play47:28

Is there any?

play47:31

Is there a public scientist?

play47:32

AUDIENCE: [INAUDIBLE].

play47:34

PROFESSOR: Who?

play47:36

AUDIENCE: He's the guy who is on Nova all the time.

play47:41

PROFESSOR: Oh, Tyson?

play47:45

AUDIENCE: Brian Greene.

play47:46

PROFESSOR: Brian Greene, he's very good.

play47:49

Tyson is the astrophysicist.

play47:55

Brian Greene is a great actor.

play47:57

He's quite impressive.

play48:03

Yeah?

play48:03

AUDIENCE: When would you say a routine has a sense of self?

play48:08

Like when you think there's something

play48:10

that like a self inside us, partly, because there's

play48:15

some processes [INAUDIBLE].

play48:21

But when would you say [INAUDIBLE]??

play48:26

PROFESSOR: Well, I think that's a funny question.

play48:28

Because if we're programming it, we can make sure that

play48:33

the machine has a very good abstract,

play48:38

but correct model of how it works, which people don't.

play48:43

So people have a sense of self, but it's only a sense of self.

play48:47

And it's just plain wrong in almost every respect.

play48:54

So it's a really funny question.

play48:56

Because when you make a machine that really

play49:01

has a good useful representation of what it is and how it works,

play49:07

it might be quite different, have different attitudes

play49:10

than a person does.

play49:12

Like it might not consider itself very valuable

play49:14

and say, oh, I could make something

play49:18

that's even better than me and jump into that.

play49:22

So it wouldn't have the--

play49:24

it might not have any self protective reaction.

play49:28

Because if you could improve yourself,

play49:32

then you don't want not to.

play49:34

Whereas we're in a state, where there's nothing much

play49:37

we could do, except try to keep living,

play49:39

and we don't have any alternative.

play49:45

It's a stupid thing to say.

play49:53

I can't imagine getting tired of living, but lots of people do.

play50:01

Yeah?

play50:02

AUDIENCE: What do you think about creative thinking

play50:04

as a way of thinking?

play50:06

And where does this thinking completely

play50:07

come from or anything that comes after?

play50:10

PROFESSOR: I had a little section about that somewhere

play50:13

that I wrote, which was the difference between artists

play50:16

and scientists or engineers.

play50:19

And engineers have a very nice situation,

play50:26

because they know what they want.

play50:29

Because somebody's ordered them to make a--

play50:33

in the last month, three times, I've

play50:36

walked away from my computer.

play50:43

How many of you have a Mac with the magnetic thing?

play50:50

And three times, I pulled it by tripping on this,

play50:54

and it fell to the floor and didn't break.

play50:57

And I've had Macs for 20 odd years or since 1980--

play51:02

when did they start?

play51:07

30 years, and they have the regular jack power supply

play51:14

in the old days.

play51:15

And I don't remember.

play51:16

And usually, when you pull the cord, it comes out.

play51:20

Here is this cord that Steve Jobs and everybody designed

play51:24

very carefully, so that when you pull it,

play51:26

nothing bad would happen.

play51:32

But it does.

play51:36

How do you account for that?

play51:37

AUDIENCE: It used to be better when the old plugs were

play51:42

perpendicular to the plus, and now it's kind of--

play51:47

PROFESSOR: Well, it's quite a wide angle.

play51:48

AUDIENCE: Right, so it works at a certain angle.

play51:53

The cable now instead of naturally lining that area

play51:57

actually naturally lies in the area where it doesn't work.

play52:00

PROFESSOR: Well, what it needs is a little ramp,

play52:02

so that it would slide out.

play52:05

I mean, it would only take a minute to file it down,

play52:07

so that it would slide out.

play52:09

AUDIENCE: Right.

play52:10

PROFESSOR: But they didn't.

play52:13

I forget why I mentioned that, but--

play52:17

AUDIENCE: [INAUDIBLE].

play52:20

PROFESSOR: Right, so what's the difference between

play52:22

an artist and an engineer?

play52:23

Well, when you do a painting, it seems to me,

play52:27

if you're already good at painting,

play52:29

then 9/10ths of the problem is, what should I paint?

play52:34

So you can think of an artist as 10% skill

play52:38

and 90% trying to figure out what the problem is to solve.

play52:43

Whereas for the engineer, somebody's told him what to do,

play52:47

make a better cabled connector.

play52:50

So he's going to spend 90% of his time actually solving

play52:54

the problem and only 10% of the time trying to decide

play52:59

what problem to solve.

play53:01

So I don't see any difference between artists and engineers,

play53:05

except that the artist has more problems

play53:10

to solve than he could possibly solve

play53:13

and usually ends up by picking a really dumb one,

play53:16

like let's have a Saint and three angels.

play53:19

Where will I put the third angel?

play53:23

That's the engineering part.

play53:30

It's just improvising, so to me, the Media Lab makes sense.

play53:38

The artists or semi artists and the scientists

play53:42

are doing almost the same thing.

play53:44

And if you look at the more arty people,

play53:48

they're a little more concerned with human social relations

play53:51

and this and that.

play53:53

And others are more concerned with very technical,

play53:57

specific aspects of signal processing or semantic

play54:01

representations and so on.

play54:05

So I don't see much difference between the arts

play54:09

and the sciences.

play54:12

And then, of course, the great moments

play54:15

are when you run into people, like Leonardo and Michelangelo,

play54:18

who get some idea that requires a great new technical

play54:24

innovation that nobody has ever done.

play54:27

And it's hard to separate them.

play54:30

I think there's some place, where Leonardo realizes

play54:34

that the lens in the eye would mean that the image is upside

play54:39

down on the retina, and he couldn't stand that.

play54:42

So there's a diagram he has, where

play54:44

the cornea is curved enough to invert the image,

play54:48

and then the lens inverts it back again,

play54:53

which is contrary to fact.

play54:55

But he has a sketch showing that he was worried about,

play55:01

if the image were upside down on the retina,

play55:05

wouldn't things look upside down?

play55:14

AUDIENCE: [INAUDIBLE] question.

play55:17

Did you ever heard about [INAUDIBLE] temporal memory,

play55:23

like--

play55:24

PROFESSOR: Temporal?

play55:26

AUDIENCE: Temporal memory, like there

play55:29

is a system that [INAUDIBLE] at the end of this each year

play55:45

on it.

play55:46

And there's some research.

play55:47

They have a paper on it.

play55:50

PROFESSOR: Well, I'm not sure what--

play55:54

AUDIENCE: This is Jeff Hawkins project?

play55:56

I don't know.

play55:57

Yeah, it's Jeff Hawkins.

play55:59

PROFESSOR: I haven't heard.

play56:00

About 10 years ago, he said--

play56:02

Hawkins?

play56:03

AUDIENCE: Yeah, Hawkins.

play56:04

PROFESSOR: Yeah, well, he was talking about 10 years ago,

play56:07

how great it was, and I haven't heard a word of any progress.

play56:11

Is there some?

play56:14

Has anybody heard-- there's a couple of books about it.

play56:19

But I've never seen any claim of that it works.

play56:24

They wrote a ferocious review of the Society of Mind,

play56:29

which came out in 1986.

play56:32

And the Hawkins group existed then

play56:36

and had this talk about a hierarchical memory system.

play56:43

AUDIENCE: [INAUDIBLE].

play56:46

PROFESSOR: As far as I can tell, it's all a bluff.

play56:48

Nothing happened.

play56:50

I've never seen a report that they have a machine, which

play56:53

solved a problem.

play56:56

Let me know if you find one, because--

play57:03

oh well.

play57:05

Hawkins got really mad at me for pointing this out,

play57:09

but I was really mad at him for having four of his assistants

play57:15

write a bad book review of my book.

play57:17

So I hope we were even.

play57:25

If anybody can find out whether--

play57:27

I forget what it's called.

play57:28

Do remember its name?

play57:32

AUDIENCE: [INAUDIBLE].

play57:36

PROFESSOR: Well, let's find out if it can do anything yet.

play57:41

Hawkins is wealthy enough to support it for a long time,

play57:45

so it should be good by now.

play57:54

Yes?

play57:55

AUDIENCE: Do you think that's going to solve the problem?

play57:58

People first start out with some sort of classification in their

play58:02

of the kind of problem it is, or is that not necessary?

play58:07

PROFESSOR: Yes, well, there's this huge book

play58:16

called Human Problem Solving, which

play58:21

I don't know how many of you know

play58:23

the names of Newell and Simon.

play58:25

Originally, it was Newell, Shaw, and Simon.

play58:31

Believe it or not, in the late 1950s,

play58:34

they did some of the first really productive AI research.

play58:40

And then, I think, in 1970, so that's

play58:47

sort of after 12 years of discovering interesting things.

play58:54

Their main discovery was the gadget

play58:58

that they called GPS, which is not global positioning

play59:01

satellite, but general problem solver.

play59:05

And you can look it up in the index of my book,

play59:11

and there's a sort of one or two page description.

play59:14

But if you ever get some spare time,

play59:18

search the web for their early paper by Newell and Simon

play59:22

on how GPS worked.

play59:24

Because it's really fascinating.

play59:26

What it did is it looked at a problem,

play59:28

and found some features of it, and then looked up in a table

play59:32

saying that, if there's this difference between what

play59:36

you have and what you want, use such and such a method.
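
(A toy rendering of that table-lookup idea, not Newell and Simon's actual program and without its recursion on subgoals: compute the difference between what you have and what you want, look up a method said to reduce that kind of difference, and apply it. All states and methods below are invented.)

```python
def first_difference(state, goal):
    """Return the first feature on which the current state misses the goal."""
    for key, wanted in goal.items():
        if state.get(key) != wanted:
            return key
    return None

# Difference -> method that is supposed to reduce that difference.
methods = {
    "location":  lambda s: {**s, "location": "workshop"},
    "has_tool":  lambda s: {**s, "has_tool": True},
    "assembled": lambda s: {**s, "assembled": True},
}

def solve(state, goal, limit=10):
    for _ in range(limit):
        diff = first_difference(state, goal)
        if diff is None:
            return state                      # goal reached
        state = methods[diff](state)          # apply the method for that difference
    return state

start = {"location": "home", "has_tool": False, "assembled": False}
goal  = {"location": "workshop", "has_tool": True, "assembled": True}
print(solve(start, goal))
```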

play59:41

So it was sort of what I called it.

play59:43

I renamed it a difference engine as a sort of joke,

play59:46

because the first computer in history

play59:49

was the one called the difference engine.

play59:54

But it was for predicting tides and things.

play59:58

Anyway, they did some beautiful work.

play60:02

And there's this big book, which I think is about 1970,

play60:05

called Human Problem Solving.

play60:08

And what they did is got some people to solve problems,

play60:14

and they trained the people to talk while they're

play60:16

solving the problem.

play60:18

So some of them were a little cryptograms,

play60:20

like if each letter stands for a digit, I've forgotten it.

play60:36

Pat, do you remember the name, one of those problems?

play60:40

John plus Joe--

play60:42

John plus Jane equals Robert or something.

play60:47

I'm sure that has no solution, but those

play60:51

are called cryptarithmetic.
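
(For readers who have not seen one: a cryptarithmetic puzzle asks for a digit assignment to letters that makes the arithmetic come out right. Below is a deliberately naive brute-force solver for the classic SEND + MORE = MONEY, chosen only as a well-known example; it has nothing to do with the protocols Newell and Simon collected.)

```python
from itertools import permutations

# SEND + MORE = MONEY: each letter stands for a distinct digit,
# and neither S nor M may be zero.
letters = "SENDMORY"

def value(word, assignment):
    return int("".join(str(assignment[ch]) for ch in word))

for digits in permutations(range(10), len(letters)):
    a = dict(zip(letters, digits))
    if a["S"] == 0 or a["M"] == 0:
        continue
    if value("SEND", a) + value("MORE", a) == value("MONEY", a):
        print(value("SEND", a), "+", value("MORE", a), "=", value("MONEY", a))
        break
```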

play60:54

So they had dozens or hundreds of people

play60:57

who would be trained to talk aloud while they're solving

play61:00

little puzzles like that.

play61:02

And then what they did was look at exactly what the people said

play61:08

and how long they took.

play61:09

And in some cases, where they move their eyes,

play61:13

they had an eye tracking machine.

play61:15

And then they wrote programs that

play61:18

showed how this guy solved a couple

play61:21

of these cryptarithmetic problems.

play61:23

Then they ran the program on a new one.

play61:26

And in some rare cases, it actually

play61:27

solved the other problem.

play61:31

So this is a book, which looks at human behavior

play61:36

and makes a theory of what it's doing.

play61:38

And the output is a rule based system,

play61:41

so it's not a very exciting theory.

play61:44

But there's never been anything like it in--

play61:50

you know, it was like Pavlov discovering conditioned

play61:53

reflexes for rats or dogs.

play61:56

And Newell and Simon are discovering some rather higher

play62:00

level almost a Rodney Brooks like system

play62:07

for how humans solve some problems that most people find

play62:10

pretty hard.

play62:12

Anyway, what there hasn't been is much--

play62:24

I don't know of any follow-up.

play62:25

They spent years perfecting those experiments,

play62:28

and writing about--

play62:30

[AUDIO OUT]

play62:32

--results.

play62:33

And anybody know anything like that?

play62:40

What psychologists are trying to make real models of real people

play62:45

solving [INAUDIBLE] problems.

play62:51

[INAUDIBLE]

play62:52

AUDIENCE: Your mic [? is off. ?]

play63:02

PROFESSOR: It has a green light.

play63:03

AUDIENCE: It has a green light, but the switch was up.

play63:06

PROFESSOR: Boo.

play63:07

Oh, [INAUDIBLE].

play63:08

AUDIENCE: We're all set now.

play63:09

PROFESSOR: [CHUCKLES]

play63:12

Yes.

play63:13

AUDIENCE: Did that [INAUDIBLE] study

play63:15

try to see when a person gave up on a particular problem-solving

play63:18

method [INAUDIBLE] how they switched-- in other words,

play63:21

when they switched to [INAUDIBLE]??

play63:24

PROFESSOR: It has inexplicable points

play63:27

at which the person suddenly gives up

play63:29

on that representation.

play63:31

And he says, oh, well, I guess R must be 3.

play63:40

Did I erase?

play63:41

Well.

play63:43

Yes, it's got episodes, and they can't account for the--

play63:47

you have these little jerks in the script

play63:51

where the model changes.

play63:53

And-- [COUGHS] sorry.

play63:57

And they announced those to be mysteries,

play64:00

and say, here's a place where the person has decided

play64:03

the strategy isn't working and starts over,

play64:07

or is changing something.

play64:10

The amazing part is that their model sometimes

play64:14

fits what the person says.

play64:16

For 50 or even 100 steps, the guy's saying,

play64:20

oh, I think z must be 2 and p must be 7.

play64:26

And that means p plus z is 9, and I wonder what's 9.

play64:31

And so their model fits for very long strings,

play64:38

maybe two minutes of the person mumbling to themselves.

play64:44

And then it breaks, and then there's another sequence.

play64:50

So Newell actually spent more than a

play64:54

year after doing it verbally, at tracking the person's eye

play65:01

motions, and trying to correlate the person's eye

play65:05

motions with what the person was talking about.

play65:09

And guess what?

play65:11

None.

play65:13

AUDIENCE: [CHUCKLING]

play65:15

PROFESSOR: It was almost as though you look at something,

play65:19

and then to think about it, you look away.

play65:24

Newell was quite distressed, because he spent about a year

play65:28

crawling over this data trying to figure out

play65:32

what kinds of mental events caused the eyes to change

play65:35

what they were looking at.

play65:37

But when the problem got hard, you

play65:39

would look at a blank part of the thing

play65:41

more often than the place where the problem turned up.

play65:47

So conclusion, that didn't work.

play65:53

When I was a very young student in college,

play65:57

I had a friend named Marcus Singer, who

play66:02

was trying to figure out how the nerve in the forelimb of a frog

play66:07

worked.

play66:09

And so he was operating on tadpoles.

play66:12

And he spent about six weeks moving

play66:17

this sciatic nerve from the leg up to the arm of this tadpole.

play66:25

And then they all got some fungus and died.

play66:32

So I said, what are you going to do?

play66:33

And he said, well, I guess I'll have to do it again.

play66:39

And I switched from biology to mathematics.

play66:42

AUDIENCE: [CHUCKLING]

play66:52

PROFESSOR: But in fact, he discovered the growth hormone

play66:56

that he thought came from the nerve and made the--

play67:01

if you cut off the limb bud of a tadpole, it'll grow another one

play67:05

and grow a whole--

play67:07

it was a newt, I'm sorry.

play67:08

It's a salamander.

play67:11

It'll grow a new hand.

play67:13

If you wait till it's got a substantial hand,

play67:16

it won't grow a new one.

play67:18

But he discovered the hormone that makes it do that.

play67:23

Yeah.

play67:23

AUDIENCE: One of the questions from the homework that

play67:26

relates to problem-solving.

play67:29

A common theme is having multiple ways

play67:31

to react to the same problem.

play67:32

But how do we choose which options

play67:34

to add as possible reactions to the same problem?

play67:36

PROFESSOR: Oh.

play67:38

So we have a whole lot of if-thens,

play67:40

and we have to choose which if.

play67:44

I don't think I have a good theory of that.

play67:50

Yes, if you have a huge rule-based system and they're--

play67:53

what does Randy Davis do?

play67:57

What if you have a rule-based system, and a whole lot of ifs

play68:02

fit the condition?

play68:04

Do you just take the one that's most often worked?

play68:08

Or if nothing seems to be working, do you--

play68:13

you certainly don't want to keep trying the same one.

play68:21

I think I mentioned Doug [? Lenat's ?] rule.

play68:23

Some people will assign probabilities to things,

play68:27

to behaviors, and then pick the way

play68:32

to react in proportion to the probability

play68:35

that that thing has worked in the past.

play68:38

And Doug [? Lenat ?] thought of doing that,

play68:42

but instead, he just put the things in a list.

play68:45

And whenever a hypothesis worked better than another one,

play68:49

he would raise it, push it toward the front of the list.

play68:54

And then whenever there was a choice, he would pick--

play68:58

of all the rules that fit, he would pick the one

play69:01

at the top of the list.

play69:02

And if that didn't work, it would get demoted.
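
A minimal sketch of that list trick, assuming a made-up rule format (the transcript only says rules were reordered by success): try the first rule whose condition fits, promote it one slot when it works, demote it when it does not.

```python
# Hypothetical sketch of success-ordered rule selection, not Lenat's actual code.
class RuleList:
    def __init__(self, rules):
        # rules: list of (name, condition_fn, action_fn); earlier means preferred
        self.rules = list(rules)

    def choose(self, situation):
        """Return the index, name, and action of the first rule that applies."""
        for i, (name, condition, action) in enumerate(self.rules):
            if condition(situation):
                return i, name, action
        return None

    def feedback(self, index, worked):
        """Move a rule one slot toward the front if it worked, else toward the back."""
        j = index - 1 if worked else index + 1
        if 0 <= j < len(self.rules):
            self.rules[index], self.rules[j] = self.rules[j], self.rules[index]

rules = RuleList([
    ("guess-frequent-letter", lambda s: True, lambda s: "try E = 9"),
    ("guess-carry",           lambda s: True, lambda s: "try carry = 1"),
])
i, name, action = rules.choose({"puzzle": "cryptarithmetic"})
rules.feedback(i, worked=False)   # the rule that failed gets demoted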

play69:05

So that's when I became an anti-probability person.

play69:13

That is, if just sorting the things

play69:16

on a list worked pretty well, surely probability's

play69:21

going to do much better.

play69:23

No, because if you do probability matching,

play69:27

you're worse off than--

play69:31

than what?

play69:34

AUDIENCE: [INAUDIBLE]

play69:35

PROFESSOR: Ray Solomonoff discovered

play69:37

that if you have a set of probabilities

play69:42

that something will work, and you

play69:46

have no memory, so that each time you come and try the--

play69:53

I think I mentioned that the other day,

play69:54

but it's worth emphasizing, because nobody in the world

play69:58

seems to know it.

play70:04

Suppose you have a list of things,

play70:06

p equals this, or that, or that.

play70:16

In other words, suppose there's 100 boxes here,

play70:20

and one of them has a gold brick in it, and the others don't.

play70:30

And so for each box, suppose the probability is 0.9

play70:37

that this one has the gold brick, and this one has 0.01.

play70:45

And this has 0.01.

play70:49

Let's see, how many of them--

play70:51

so there's 10 of these.

play70:53

That makes--

play71:00

Now, what should you do?

play71:02

Suppose you're allowed to keep choosing a box,

play71:08

and you want to get your gold brick as soon as possible.

play71:13

What's the smart thing to do?

play71:15

Should you-- but you have no memory.

play71:22

Maybe the gold brick is decreasing in value,

play71:24

I don't care.

play71:25

But so should you keep trying 0.9 if you have no memory?

play71:32

Of course not.

play71:34

Because if you don't get it the first time,

play71:36

you'll never get it.

play71:39

Whereas if you tried them at random each time, then

play71:44

you'd have 0.9 chance of getting it, so in two trials,

play71:50

you'd have--

play71:52

what am I saying?

play71:53

In 100 trials, you're pretty sure to get it,

play71:56

but in [? e-hundred ?] trials, almost certain.

play72:03

So if you don't have any memory, then probability matching

play72:08

is not a good idea.

play72:10

Certainly, picking the highest probability

play72:13

is not a good idea, because if you

play72:16

don't get it the first trial, you'll never get it.

play72:20

If you keep using the probabilities at--

play72:26

what am I saying?

play72:27

Anyway, what do you think is the best thing to do?

play72:30

It's to take the square roots of those probabilities,

play72:34

and then divide them by the sum of the square roots

play72:37

so it adds up to 1.
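
One way to check that claim is to compute the expected number of trials directly. If box i holds the brick with prior probability p_i, and a memoryless searcher opens box i with probability q_i on every trial, the expected number of trials is the sum over i of p_i / q_i, which is smallest when q_i is proportional to the square root of p_i. A short sketch, using the one-box-at-0.9, ten-boxes-at-0.01 example from the board:

```python
# Expected trials for a memoryless searcher: E = sum_i p_i / q_i,
# where p is the prior over boxes and q is the fixed search mixture.
import math

def expected_trials(p, q):
    return sum(pi / qi for pi, qi in zip(p, q) if pi > 0)

p = [0.9] + [0.01] * 10                      # one likely box, ten unlikely ones

matching = p                                  # probability matching: q = p
roots = [math.sqrt(pi) for pi in p]
sqrt_rule = [r / sum(roots) for r in roots]   # q proportional to sqrt(p)
uniform = [1.0 / len(p)] * len(p)             # blind uniform guessing

print(expected_trials(p, matching))    # 11.0, the number of boxes
print(expected_trials(p, uniform))     # 11.0, no better than matching
print(expected_trials(p, sqrt_rule))   # about 3.8
# Always opening the 0.9 box is worse still: one time in ten you never find it.
```

Probability matching turns out to be no better than uniform guessing here, which is the professor's point; the square-root mixture cuts the expected search to roughly a third.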

play72:41

So a lot of psychologists design experiments until they

play72:45

get the [? rat ?] to match the probability.

play72:49

And then they publish it.

play72:51

Sort of like the--

play72:57

but if the animal is optimal and doesn't have much memory,

play73:00

then it shouldn't match the probability of the unknown.

play73:04

It should-- end of story.

play73:13

Every now and then, I search every few years

play73:18

to see if anybody has noticed this thing, which--

play73:23

and I've never found it on the web.

play73:38

Yeah.

play73:39

AUDIENCE: So earlier in the class,

play73:41

you mentioned that the rule-based methods didn't work,

play73:44

and that several other methods were

play73:46

tried between the [INAUDIBLE] [? immunities. ?]

play73:48

Could you go into a bit about what these other methods were

play73:52

that have been tried?

play73:54

PROFESSOR: I don't mean to say they don't work.

play73:57

Rule-based methods are great for some kinds of problems.

play74:02

So most such systems make money-- if you're

play74:10

trying to make hotel reservations

play74:16

and things. This business of rule-based systems

play74:24

has a nice history.

play74:26

A couple of AI researchers, really, notably Ed Feigenbaum,

play74:31

who was a student of Newell and Simon,

play74:37

started a company for making rule-based systems.

play74:43

And the company did pretty well for a while,

play74:49

and they maintained that only an expert

play74:52

in artificial intelligence could be really good at making

play74:56

rule-based systems.

play74:57

And so they had a lot of customers,

play74:59

and quite a bit of success for a year or two.

play75:03

And then some people at Arthur D. Little

play75:06

said, oh, we can do that.

play75:08

And they made some systems that worked fine.

play75:12

And the market disappeared, because it turned out

play75:17

that you didn't have to be good at anything

play75:20

in particular to make rule-based systems work.

play75:26

But for doing harder problems, like translating

play75:30

from one language to another, you really

play75:35

needed to have more structure, and you couldn't just

play75:39

take the probabilities of words being in a sentence,

play75:42

but you had to look for bigrams and trigrams,

play75:45

and have some grammar theory, and so forth.
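
As a small illustration of what looking at bigrams means, the sketch below counts adjacent word pairs in a toy corpus and scores a new sentence by how familiar its pairs are. This is a cartoon of statistical language modeling (no smoothing, no trigrams, no grammar), meant only to contrast with scoring each word on its own.

```python
# Toy bigram model: count adjacent word pairs in a corpus, then score a
# sentence by the average count of its own pairs.  A cartoon, not a translator.
from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def score(sentence):
    words = sentence.split()
    pairs = list(zip(words, words[1:]))
    return sum(bigrams[p] for p in pairs) / max(len(pairs), 1)

print(score("the cat sat on the rug"))   # high: every adjacent pair was seen
print(score("rug the on sat cat the"))   # zero: same words, unfamiliar order
```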

play75:54

But generally, if you have an ordinary data-processing

play76:00

problem, try a rule-based system first,

play76:03

because if you understand what's going on,

play76:08

it's a good chance you'll get things to work.

play76:11

I'm sure that's what the Hawkins thing started out as.

play76:29

I don't have any questions.

play76:48

AUDIENCE: Could I ask another one for the homeworks?

play76:51

PROFESSOR: Sure.

play76:52

AUDIENCE: OK.

play76:55

Computers and machines can use relatively few

play76:57

electronic components to run a batch of different types

play77:00

of thought operations.

play77:02

All that changes is data over which the operation runs.

play77:04

In the critic-selector model,

play77:06

are resources different bundles of data or different physical

play77:09

parts of the brain?

play77:09

PROFESSOR: Which model?

play77:11

AUDIENCE: The critic-selector model.

play77:15

PROFESSOR: Oh.

play77:16

Actually, I've never seen a large-scale theory of how

play77:30

the brain connects its--

play77:36

there doesn't seem to be a global model anywhere.

play77:40

Anybody read any neuroscience books lately?

play77:46

AUDIENCE: [CHUCKLING]

play77:49

PROFESSOR: I mean, I just don't know of any big diagrams.

play78:02

Here's this wonderful behavioral diagram.

play78:05

So how many of you have run across the word "ethology?"

play78:15

Just a few.

play78:17

There's a branch of the psychology

play78:20

of animals, which is--

play78:24

AUDIENCE: [CHUCKLING]

play78:26

PROFESSOR: Thanks.

play78:28

Which is called ethology.

play78:31

And it's the study of instinctive behavior.

play78:39

And the most famous people in that field--

play78:44

who?

play78:45

Well, Niko Tinbergen and Konrad Lorenz are the most famous.

play78:52

I've just lost the name of the guy from around 1900

play78:59

who wrote a lot about the behavior of ants.

play79:04

Anybody ring a bell?

play79:09

So he was the first ethologist.

play79:11

And these people don't study learning because it's hard to--

play79:16

I don't know why.

play79:18

So they're studying instinctive behavior,

play79:20

which is, what are the things that all fish do

play79:24

of a certain species?

play79:26

And you get these big diagrams.

play79:43

This is from a little book which you really should read

play79:46

called The Study of Instinct.

play80:02

And it's a beautiful book.

play80:06

And if that's not enough, then there's

play80:10

a two-volume similar book by Konrad Lorenz,

play80:15

who was an Austrian researcher.

play80:23

They did a lot of stuff together, these two people.

play80:28

And it's full of diagrams showing the main behaviors

play80:34

that they were able to observe of various low-cost animals.

play80:43

I think I mentioned that I had some fish,

play80:46

and I watched the fish tanks, what

play80:50

they were doing for a very long time,

play80:54

and came to no conclusions at all.

play80:57

And when I finally read Tinbergen and Lorenz,

play81:03

I realized that it just had never occurred to me

play81:08

to guess what to look for.

play81:14

My favorite one was that whenever a fire engine went by,

play81:20

Lorenz's sticklebacks, the male sticklebacks

play81:23

would go crazy and look for a female.

play81:26

Because when the female's in heat, or whatever it's called--

play81:29

estrus-- the lower abdomen turns red.

play81:36

I think fire engines have turned yellow recently,

play81:38

so I don't know what the sticklebacks do about that.

play81:48

So if you're interested in AI, you really

play81:52

should look at at least one of these people,

play81:56

because that's the first appearance

play81:59

of rule-based systems in great detail in psychology.

play82:05

There weren't any computers yet.

play82:16

There must be 20 questions left.

play82:22

Yeah.

play82:22

AUDIENCE: While we're in the topic of ethology,

play82:25

so I know that early on, people were kind of--

play82:31

they were careful not to apply ethology

play82:34

to humans until about the '60s, with E.O. Wilson and sociobiology.

play82:42

So I was wondering about your opinion on that,

play82:44

and maybe you have anecdotes on [INAUDIBLE]

play82:46

pretty controversial around this area especially.

play82:49

PROFESSOR: Oh, I don't know.

play82:53

I sort of grew up with Ed Wilson because we

play82:56

had the same fellowship at Harvard for three years.

play82:59

But he was almost never there, because he

play83:02

was out in the jungle in some little telephone booth

play83:06

watching the birds, or bees, or--

play83:13

he also had a 26-year-old ant.

play83:17

Ant, not aunt.

play83:19

Ant.

play83:25

A-N-T.

play83:30

I'm not sure what the controversy would have been,

play83:32

but of course, there would be humanists who would

play83:38

say people aren't animals, but.

play83:45

But then what the devil are they?

play83:50

Why aren't they better than the--

play83:51

[CHUCKLES]

play83:57

You've got to read this.

play83:58

It's a fairly short book.

play84:00

And you'll never see an animal as the same

play84:03

again, because I swear, you start

play84:08

to notice all these little things.

play84:10

You're probably wrong, but you start

play84:14

picking up little pieces of behavior,

play84:16

and trying to figure out what part of the instinct system

play84:21

is it.

play84:28

Lorenz was particularly-- I think

play84:31

in chapter 2 of the emotion machine,

play84:35

I have some quotes from these guys.

play84:38

And Lorenz was particularly interested in how animals

play84:47

got attached to their parents--

play84:49

that is, for those animals that do

play84:52

get attached to their parents.

play84:53

Like alligator babies live in the alligator's mouth

play84:58

for quite a while.

play85:01

It's a good, safe place.

play85:05

And Lorenz would catch birds just when they're hatching.

play85:16

And within the first day or so, some baby birds

play85:20

get attached to whatever large moving object is nearby.

play85:26

And that was often Konrad Lorenz, rather than

play85:30

the bird's mother, who is supposed

play85:33

to be sitting on the egg when it hatches,

play85:35

and the bird gets attached to the mother.

play85:38

Most birds do, because they have to stay around and get fed.

play85:44

So it is said that wherever Lorenz went in Vienna,

play85:52

there were some ducks or whatever--

play85:54

birds that had gotten imprinted on him would come out

play85:58

of the sky and land on his shoulder, and no one else.

play86:05

And he has various theories of how they recognize him.

play86:10

But you could do that too.

play86:22

Anyway, that was quite a field, this thing called ethology.

play86:26

And between 1920 and 1950--

play86:30

1930, I guess, 1950--

play86:33

there were lots of people studying

play86:35

the behavior of animals.

play86:36

And Ed Wilson is probably the most well-known successor

play86:45

to Lorenz and Tinbergen. And I think he just wrote a book.

play86:50

Has anybody seen it?

play86:55

He has a huge book called Sociobiology,

play86:58

which is too heavy to read.

play87:11

I've run out of things.

play87:14

Yes.

play87:15

AUDIENCE: Still thinking about the question [INAUDIBLE]..

play87:19

[INAUDIBLE],, The Society of Mind,

play87:22

ideas in that book, [INAUDIBLE] the machinery from it.

play87:29

What would the initial state of the machinery be [INAUDIBLE]

play87:32

start something?

play87:32

Is that dictated by the goals given to it?

play87:36

And by state, I mean the different agents, the resources

play87:39

they have access to.

play87:40

What would that initial state look like?

play87:44

PROFESSOR: He's asking, if you made a program modeling the

play87:50

Society of Mind architecture, what would you

play87:53

put in it to start with?

play87:55

I never thought about that.

play87:57

Great question.

play87:58

I guess it depends whether you wanted

play88:00

to be a person, or a marmoset, or chicken, or something.

play88:10

Are there some animals that don't learn anything?

play88:13

Must be.

play88:15

What do the ones that Sydney Brenner studied?

play88:19

AUDIENCE: C. elegans?

play88:20

They [? learned ?] very simple associations.

play88:24

PROFESSOR: The little worms?

play88:25

AUDIENCE: Mm-hmm.

play88:31

PROFESSOR: There was a rumor that if you fed them RNA--

play88:36

was it them or was it some slightly higher animal?

play88:41

AUDIENCE: It was worms.

play88:42

PROFESSOR: What?

play88:43

AUDIENCE: RNA interference.

play88:43

Is that what you're talking about?

play88:45

Yeah.

play88:46

PROFESSOR: There was one that if you taught a worm

play88:48

to turn left when there was a bright light, or right,

play88:54

and put some of its RNA into another worm,

play88:59

that worm would copy that reaction even

play89:03

though it hadn't been trained.

play89:06

And this was--

play89:07

AUDIENCE: That wasn't worms.

play89:08

That was slugs.

play89:10

PROFESSOR: Slugs.

play89:11

AUDIENCE: I think it was [INAUDIBLE] replace

play89:13

the [INAUDIBLE] or something.

play89:14

AUDIENCE: Some little snail-like thing.

play89:18

And nobody was ever able to replicate it.

play89:21

So that rumor spread around the world quite happily,

play89:25

and there was a great science fiction story--

play89:31

I'm trying to remember--

play89:35

in which somebody got to eat some alien's RNA

play89:42

and got magical powers.

play89:43

AUDIENCE: [CHUCKLING]

play89:46

PROFESSOR: I think it's Larry Niven, who

play89:48

is wonderful at taking little scientific ideas

play89:55

and making a novel out of them.

play90:00

And his wife Marilyn was an undergraduate here.

play90:08

So she introduced me to Larry Niven, and--

play90:20

I once gave a lecture and he wrote it up.

play90:23

It was one of the big thrills, because Niven

play90:28

is one of my heroes.

play90:29

Imagine writing a book with a good idea in every paragraph.

play90:33

AUDIENCE: [CHUCKLING]

play90:36

Vernor Vinge, and Larry Niven, and Frederik Pohl

play90:44

seem to be able to do that.

play90:47

Or at least on every page.

play90:49

I don't know about every paragraph.

play90:52

Yeah.

play90:52

AUDIENCE: To follow up on that question,

play90:55

it seems to me that you almost were

play90:57

saying that if this machinery exists,

play91:00

the difference between these sort of animals

play91:04

would be in [INAUDIBLE].

play91:05

And I think on [INAUDIBLE],, we can

play91:07

create like a chicken or a human [INAUDIBLE]..

play91:10

PROFESSOR: Well, no.

play91:17

I don't think that most animals have scripts.

play91:22

Some might, but I'd say that--

play91:34

I don't know where most animals are,

play91:36

but I sort of make these six levels,

play91:41

and I'd say that none of the animals

play91:43

have this top self-reflective layer except, for all we know,

play91:48

dolphins, and chimpanzees, and whatever.

play91:56

It would be nice to know more about octopuses,

play91:58

because they do so many wonderful things

play92:03

with their eight legs.

play92:08

How does it manage?

play92:11

Have you seen pictures of an octopus picking up a shell,

play92:15

and walking to some quiet place, and it's got--

play92:20

there's some movies of this on the web.

play92:24

And then it drops the shell and climbs under it and disappears.

play92:31

It's hard to imagine programming a robot to do that.

play92:39

Yeah.

play92:40

AUDIENCE: So I've noticed, both in your books and in lecture,

play92:43

a lot of your models and diagrams

play92:46

seem to have very hierarchical structure to them.

play92:48

But as you [INAUDIBLE] in your book and other places,

play92:53

passing between [INAUDIBLE] feedback and self-reference

play92:55

are all very important [INAUDIBLE]..

play92:57

So I'm curious if you can discuss

play92:59

some of the uses of these very hierarchical models, why you

play93:03

represented so many things that way

play93:05

instead of [INAUDIBLE] theorem.

play93:07

PROFESSOR: Well, it's probably very hard to debug things that

play93:10

aren't.

play93:13

So we need a meta theory.

play93:17

One thing is that, for example, it

play93:22

looks like all neurons are almost the same.

play93:26

Now, there's lots of difference in geometric features of them,

play93:31

but they all use the same one or two transmitters,

play93:35

and every now and then, you run across people saying,

play93:44

oh, neurons are incredibly complicated.

play93:47

They have 100,000 connections.

play93:50

You can find it if you just look up "neuron" on the web

play93:56

and get these essays explaining that nobody will ever

play94:01

understand them, because typically,

play94:03

a neuron is connected to 100,000 others, and blah, blah, blah.

play94:07

So it must be something inside the neuron that

play94:09

figures out all this stuff.

play94:12

As far as I can see, it looks almost like the opposite.

play94:15

Namely, probably the neuron hasn't

play94:18

changed for half a billion years very much,

play94:22

except in superficial ways in which it grows.

play94:28

Because if you changed any of the genes controlling

play94:32

its metabolism or the way it propagates impulses,

play94:41

then the animal would die before it was born.

play94:48

And so you can't make--

play94:52

that's why the embryology of all mammals is almost identical.

play94:56

You can't make a change at that level after the first--

play95:03

before the-- you can't make changes

play95:07

before the first generations of cell divisions,

play95:13

or everything would be clobbered.

play95:15

The architecture would be all screwed up.

play95:18

So I suspect that the people who say, well,

play95:21

maybe the important memories of a neuron

play95:23

are inside it, because there's so many fibers and things.

play95:28

I bet it's sort of like saying the important memory

play95:32

in a computer is in the arsenic and phosphorus

play95:35

atoms of the semiconductor.

play95:38

So I think things have to be hierarchical in evolution,

play95:43

because if you're building later stuff on earlier stuff, then

play95:49

it's very hard to make any changes in the earlier stuff.

play95:53

So as far as I know, the neurons in sea anemones

play95:57

are almost identical to the neurons in mammals,

play96:01

except for the later stages of growth,

play96:07

and the way the fibers ramify, and--

play96:13

who knows, but there are many people

play96:18

who want to find the secret of the brain

play96:20

in what's inside the neurons rather than outside.

play96:28

It'd be nice to get a textbook on neurology

play96:32

from 50 years in the future, see how much of that

play96:36

stuff mattered.

play96:41

Where's our time machines?

play96:44

Did you have--

play96:44

AUDIENCE: Yeah.

play96:45

Most systems have a state that they prefer to be in,

play96:49

like a state that they're most comfortable.

play96:51

Do you think the mind has such a state,

play96:53

or would it tend to certain places or something?

play96:58

PROFESSOR: That's an interesting question.

play96:59

I don't-- how does that apply to living things?

play97:05

I mean, this bottle would rather be here than here,

play97:08

but I'm not sure what you mean.

play97:10

AUDIENCE: Well, so apparently, in Professor Tenenbaum's class,

play97:18

he shows this example of a number game.

play97:22

They'll give you a sequence of numbers,

play97:24

and he'll ask you to find a pattern in it.

play97:27

So for example, if you had a pattern like 10, 40, 50,

play97:30

and 55, he asked the class to come up

play97:34

with different things that could be described in the sequence.

play97:37

And between the choice of, oh, this sequence

play97:41

is a sequence of the multiples of 5

play97:45

versus a sequence of multiples of 10 or multiples of 11,

play97:51

he says something like--

play97:52

he phrases it like, the multiples of 5

play97:55

would have a higher [INAUDIBLE] probability.

play97:59

So that got me thinking, why would that be--

play98:03

would our minds have a preference

play98:05

for having as few categories as possible in trying

play98:09

to view the world around us, trying to categorize things

play98:12

in as few things as possible is what got me thinking about it.

play98:17

PROFESSOR: Sounds very strange to me, but certainly,

play98:24

if you're going to generate hypotheses, you have to have--

play98:33

the way you do it depends on what this--

play98:37

what does this problem remind you of?

play98:40

So I don't see how you could make a general--

play98:49

if you look at the history of psychology,

play98:52

there are so many efforts to find three laws of motion like

play98:56

Newton's.

play98:58

Is he trying to do that?

play99:04

I mean, here you're talking about people with language,

play99:07

and high-level semantics, and--

play99:19

let's ask him what he meant.

play99:30

AUDIENCE: Professor [INAUDIBLE].

play99:31

PROFESSOR: Yeah.

play99:31

AUDIENCE: This is more of a social question,

play99:33

but there's always this debate about how

play99:35

if AI gets to the point where it can take care of humans,

play99:38

will it ever destroy humanity?

play99:40

And do you think that's something that we should fear?

play99:44

And if so, is there some way we can prevent it?

play99:54

PROFESSOR: If you judge by the recent--

play99:57

by what's happened in AI since 1980,

play100:01

it's hard to imagine anything to fear.

play100:03

But--

play100:04

AUDIENCE: [CHUCKLING]

play100:06

PROFESSOR: But-- funny you should mention that.

play100:15

I'm just trying to organize a conference sometime next year

play100:21

about disasters.

play100:23

And there's a nice book about disasters by--

play100:31

what's his name?

play100:33

The Astronomer Royal.

play100:35

What?

play100:36

AUDIENCE: Martin Rees?

play100:37

PROFESSOR: Martin Rees.

play100:39

So he has a nice book, which I just ordered from Amazon,

play100:44

and it came the next day.

play100:47

And it has about 10 disasters, like a big meteor

play100:54

coming and hitting the Earth.

play100:59

I forget the other 10, but I have it in here somewhere.

play101:03

So I generated another list of 10 to go with it.

play101:07

And so there are lots of bad things that could happen.

play101:17

But I think right now, that's not

play101:26

on the top of the list of disasters.

play101:30

Eventually, some hacker ought to be

play101:33

able to stop the net from working

play101:37

because it's not very secure.

play101:40

And while you're at it, you could probably

play101:43

knock out all of the navigation satellites

play101:48

and maybe set off a few nuclear reactors.

play101:56

But I don't think AI is the principal thing to worry about,

play102:03

but it could very suddenly get to be a problem.

play102:06

And there are lots of good science fiction stories.

play102:10

My favorite is the Colossus series by DF Jones.

play102:15

Anybody know-- there was a movie called The Forbin Project,

play102:21

and it's about somebody who builds an AI,

play102:24

and it's trained to do some learning.

play102:28

And it's also the early days of the web,

play102:33

and it starts talking to another computer in Russia.

play102:39

And suddenly, it gets faster and faster,

play102:42

and takes over all the computers in the world,

play102:45

and gets control of all the missiles, because they're

play102:49

linked to the network.

play102:53

And it says, I will destroy all the cities in the world

play102:58

unless you clear off some island and start

play103:01

building the following machine.

play103:06

I think it's Sardinia or someplace.

play103:09

So they get bulldozers.

play103:15

And it starts building another machine,

play103:17

which it calls Colossus 2.

play103:20

And they ask, what's it going to do?

play103:25

And Colossus says, well, you see,

play103:29

I have detected that there's a really bad AI out in space,

play103:33

and it's coming this way, and I have to make myself

play103:36

smarter than it really quick.

play103:41

Anyway, see if you can order the sequel to Colossus.

play103:49

That's the second volume where the invader actually arrives

play103:54

and I forget what happens.

play103:56

And then there's a third one, which

play103:58

was an anticlimax, because I guess

play104:02

DF Jones couldn't think of anything worse

play104:04

that could happen.

play104:06

AUDIENCE: [CHUCKLING]

play104:08

PROFESSOR: But Martin Rees can.

play104:13

Yeah.

play104:14

AUDIENCE: Going back to her question about example,

play104:19

and if a mind has a state that it prefers to be in,

play104:24

would that example be more of a pattern recognition example?

play104:28

So instead of 10, 40, 50, 55, what

play104:32

if it was [? logistical, ?] like, good, fine, great,

play104:37

and you have to come up with a word that could potentially

play104:43

fit in that pattern.

play104:44

And then that pattern could be ways to answer "how are you?"

play104:48

PROFESSOR: Let's do an experiment.

play104:50

How many of you have a resting state?

play104:57

AUDIENCE: [INAUDIBLE]

play105:01

PROFESSOR: Sometimes when I have nothing else to do,

play105:05

I try to think of "Twinkle Twinkle, Little Star"

play105:10

happening with the second one starting in the second measure,

play105:16

and then the third one starts up in the third measure.

play105:19

And when that happens, I start losing the first one.

play105:23

And ever since I was a baby, when I have nothing else

play105:28

to do-- which is almost never--

play105:34

I try to think of three versions of the same tune at once

play105:38

and usually fail.

play105:41

What do you do when you have nothing else to do?

play105:45

Any volunteers?

play105:47

What's yours?

play105:47

AUDIENCE: I try not to think anything at all.

play105:49

See how long [INAUDIBLE].

play105:50

PROFESSOR: You try not to, or to?

play105:52

AUDIENCE: Not to.

play105:54

PROFESSOR: Isn't that a sort of a Buddhist thing?

play105:57

AUDIENCE: Guess so.

play105:59

PROFESSOR: Do you ever succeed?

play106:00

How do you get out of it?

play106:02

You have to think, well, enough of this nothingness.

play106:07

If you succeeded, wouldn't you be dead?

play106:09

AUDIENCE: [CHUCKLING]

play106:12

PROFESSOR: Or stuck?

play106:13

AUDIENCE: Eventually, some stimulus

play106:15

will appear that is too interesting to ignore.

play106:17

AUDIENCE: [CHUCKLING]

play106:18

PROFESSOR: Right, and the threshold

play106:20

goes down till even the most boring thing is fascinating.

play106:24

AUDIENCE: Yeah.

play106:24

AUDIENCE: [CHUCKLING]

play106:27

PROFESSOR: Make a good short story.

play106:31

Yeah.

play106:32

AUDIENCE: There was actually a movie that really

play106:35

got to me when I was little.

play106:36

These aliens were trying to infiltrate people's brains,

play106:41

and like their thoughts.

play106:42

And to keep the aliens from infiltrating your thoughts,

play106:46

you had to think of a wall, which didn't

play106:49

make any sense at all, but--

play106:50

AUDIENCE: [CHUCKLING]

play106:52

AUDIENCE: But now, whenever I try to think of nothing,

play106:55

I just end up thinking of a wall.

play106:57

AUDIENCE: [LAUGHING]

play107:04

PROFESSOR: There are these awful psychoses, and about

play107:12

every five years, I get an email from someone

play107:19

who says that, please help me, there's

play107:23

some people who are putting these terrible ideas

play107:26

in my head.

play107:27

Have you ever gotten one, Pat?

play107:30

And they're sort of scary, because you

play107:35

realize that maybe the person will suddenly

play107:39

figure out that it's you who's doing it, if they--

play107:42

AUDIENCE: [CHUCKLING]

play107:50

AUDIENCE: [INAUDIBLE] husband [INAUDIBLE]

play107:52

all them together once, and I think they married.

play107:54

AUDIENCE: [LAUGHING]

play108:00

PROFESSOR: I remember there was once--

play108:06

one of them came to visit--

play108:09

actually showed up, and he came to visit Norbert Wiener, who

play108:13

is famous for--

play108:16

I mean, he's the cybernetics person of the world.

play108:22

And this person came in, and he got

play108:25

between Wiener and the door, and started

play108:30

explaining that somebody was putting dirty words in his head

play108:35

and making the grass on their lawn die.

play108:40

And he was sure it was someone in the government.

play108:43

And this was getting pretty scary.

play108:48

And I was near the door, so I went and got [INAUDIBLE]----

play108:56

it's a true story-- who was nearby,

play108:58

and I got [INAUDIBLE] to come in.

play109:02

And [INAUDIBLE] actually talked this guy down, and took him

play109:06

by the arm, and went somewhere, and I don't know what happened,

play109:10

but Wiener was really scared, because the guy kept

play109:16

keeping him from going out.

play109:19

[INAUDIBLE] was big.

play109:21

Wiener's not very big.

play109:23

AUDIENCE: [CHUCKLING]

play109:29

PROFESSOR: Anyway, that keeps happening.

play109:33

Every few years, I get one.

play109:35

And I don't answer them.

play109:40

He's probably sending it to several people.

play109:42

And I'm sure one of them is much better at it than we are.

play109:49

How many of you have ever had to deal with a obsessed person?

play109:54

How did they find you?

play109:57

AUDIENCE: I don't know.

play109:58

They found a number of people in the media lab, actually.

play110:04

PROFESSOR: Don't answer anything.

play110:07

But if they actually come, then it's not clear what to do.

play110:23

Last question?

play110:28

Thanks for coming.


Related Tags
Artificial Intelligence, Cognitive Science, Problem Solving, Creativity, Machine Consciousness, MIT Course, Psychology, Neuroscience, Ethics, Technology Development, Future Trends