7. Layered Knowledge Representations

MIT OpenCourseWare
4 Mar 2014 · 109:44

Summary

TLDR: In this conversation, Marvin Minsky and the audience explore artificial intelligence, cognitive psychology, computer science, and the nature of the brain and learning. Minsky raises questions about future research directions for AI, stresses the importance of interdisciplinary research, and discusses Freud's model of psychological structure, comparing it with layered architectures in AI. He explains how people handle goals and desires, and how the brain reduces the differences between its current state and a desired state through different mechanisms. He also considers how memories are stored in the brain, in particular the role of the amygdala in short-term memory and the effect of sleep on long-term retention. He presents several theories about memory, language, and learning, including the K-line theory, and discusses what makes mathematics hard to learn, emphasizing the value of multiple representations in education. The conversation covers a wide range of topics and shows the depth of Minsky's insight into human cognition and artificial intelligence.

Takeaways

  • 📚 Marvin Minsky discusses future research directions for artificial intelligence and stresses the importance of interdisciplinary collaboration.
  • 🤔 Minsky raises philosophical questions about AI, such as the nature of intelligence and how consciousness works.
  • 🧠 He explores the layered structure of human thought, drawing an analogy with Freud's id, ego, and super-ego and how those layers interact.
  • 📈 Minsky emphasizes leaving room in scientific theories for new ideas, rather than trying to reduce complex thought to the fewest possible mechanisms.
  • 💡 He presents theories about human memory and learning, including how short-term memory becomes long-term memory and which brain structures may be involved.
  • 🎓 Minsky discusses how mathematics is taught and learned, stressing the importance of multiple methods and analogies for understanding difficult concepts.
  • 🎵 Using analogies with music and art, Minsky illustrates how non-verbal forms can help people understand and express complex ideas.
  • 🧐 He raises questions about the relationship between language and thought, and how language shapes our cognition and thinking.
  • 📘 Minsky also refers to the discussion in his book The Emotion Machine of the role of emotions in human cognition.
  • 🤝 He discusses the importance of teamwork and how collaboration among different individuals can solve complex problems.
  • 🌐 Minsky highlights the ethical and social issues AI development must take into account, including privacy, autonomy, and the future of human work.

Q & A

  • How does MIT OpenCourseWare keep providing high-quality educational resources?

    - MIT OpenCourseWare relies on support and donations from the public to continue offering high-quality educational resources. Viewers can visit ocw.mit.edu to access additional materials from hundreds of MIT courses and to make a donation.

  • What does Marvin Minsky suggest AI researchers should work on next?

    - Minsky suggests that AI researchers should look at questions such as consciousness, learning mechanisms, and how to better model human cognitive processes. He also discusses theories of goals, memory, and cognitive representation.

  • What does Freud's model of psychological structure offer AI?

    - Freud's model of the id, ego, and super-ego provides a framework for understanding complex human behavior. In AI it can be compared to layered computational models in which each layer handles different kinds of information and tasks, with conflict and coordination between them.

  • How does Marvin Minsky describe his understanding of human consciousness?

    - Minsky describes consciousness as built from multiple layers, including basic instincts, social learning, ideals, reflective thinking, and self-reflection. These layers interact, and his "Society of Mind" theory describes how different modules or "agents" in the brain work together to produce complex thought and behavior.

  • Why is the Lisp language important in the history of AI?

    - Lisp was a milestone in the history of AI because it was the first language that could manipulate symbols and expressions, which gave it a unique advantage for modeling human thinking and logical reasoning. The contributions of Marvin Minsky and John McCarthy to Lisp had a lasting influence on the field.

  • What is Marvin Minsky's K-line theory of human memory?

    - The K-line theory is Minsky's hypothesis about how memories are stored and retrieved in the brain. He suggests that memories may be held in a dynamic form in loops of connected neurons and then transferred and consolidated across different brain regions through mechanisms such as sleep.

  • What views on learning mathematics does Minsky express in the conversation?

    - Minsky says that when learning mathematics it is very important to have several different representations. He stresses the value of multiple methods and analogies for understanding mathematical concepts, and of practice and repetition for refining those representations.

  • What is Minsky's view of the education system?

    - Minsky believes the education system should encourage students to solve problems in multiple ways and should provide multiple representations. He also notes that education rarely examines how the learning process itself could be optimized.

  • What does Minsky say about the relationship between music and mathematics?

    - Minsky observes that although music and mathematics are different fields, there is some connection between them. He suggests that people who are good at mathematics are not necessarily good at music, because music may require a deep grasp of the representation of sound and its patterns, which differs from mathematical logic and abstraction.

  • How does Minsky view the effect of practice on learning?

    - Minsky sees practice not as mere repetition but as a process in which the variation in each attempt deepens understanding and mastery. Varied practice helps learners approach problems from different angles, which extends and refines their cognitive representations.

  • What interesting observation does Minsky make about bilingual learners?

    - Minsky tells a story about his daughter learning color names, which shows how children form new concepts by combining existing ones. He also mentions phenomena reported by bilingual speakers, such as recalling a particular conversation but forgetting which language it took place in.

Outlines

00:00

📚 Sharing educational resources and the future of AI research

This segment notes that the video is provided under a Creative Commons license and encourages viewers to support MIT OpenCourseWare so it can keep offering high-quality educational resources. In conversation with the audience, Marvin Minsky then takes up future research directions in artificial intelligence, asking what questions AI researchers should be working on and how to build multi-layered theories of mind.

05:04

🤔 Layers of mind and knowledge representation

Marvin Minsky discusses the different layers of the mind, from instinctive reactions to self-consciousness, and describes his six-layer model. He also notes parallels between psychology and computer science, and how knowledge or skills might be represented by mechanisms at different levels.

10:04

💭 Goals, differences, and decision processes

In this segment, Minsky examines the concept of a goal as the difference between the current state and a desired state. He explains how the brain handles these differences, removing them either by changing the situation or by giving up the goal, and discusses theories of how such differences might be represented and processed.

15:11

🧠 Brain organization and memory

Minsky describes his picture of how the brain might be organized, especially how memories are represented and processed. He talks about networks of neurons, how they can represent complex information, and theories of how memories are stored and retrieved.

20:11

📈 Memory, learning, and cognitive representations

Minsky continues with memory and learning, in particular how memories are encoded and stored in the brain. He presents the K-line theory and considers the problems of modifying and copying memories, as well as the role of language in cognition.

25:14

🎓 Mathematics, education, and cognitive fluency

Minsky discusses what makes mathematics hard to learn and stresses the importance of multiple representations in education. He also talks about gaining cognitive fluency through practice and play, and how representations can be refined and extended in fields such as mathematics and music.

30:18

🎼 Music, mathematics, and creative thinking

In the final segment, Minsky and the audience discuss the connection between music and mathematics and the role of creative thinking in problem solving. They explore how problems can be approached in different ways and how practice improves the ability to solve them.

Keywords

💡 Artificial intelligence

Artificial intelligence (AI) refers to intelligent behavior produced by artificial systems, including the ability to learn, reason, solve problems, perceive, and understand language. In the video, Marvin Minsky discusses the direction of the field and the questions AI researchers should focus on, emphasizing AI's role in modeling human cognition and socialization.

💡 Psychological models

A psychological model is an abstract representation of human mental processes, including emotion, cognition, and motivation. The video refers to Freud's id, ego, and super-ego, along with Minsky's own views on such models and how they help explain human behavior and decision making.

💡 Cognitive science

Cognitive science is an interdisciplinary field that studies human cognitive processes such as learning, memory, thinking, and language. Minsky touches on some of its basic ideas, for example how knowledge and skills can be understood in terms of different levels of organization, and the connections between cognitive science and artificial intelligence.

💡 Computer science

Computer science studies computers and the phenomena and problems surrounding them, including algorithms, data structures, and programming languages. Minsky discusses its potential for modeling human thinking and problem solving, and how computer programs can be used to explore and extend our understanding of cognitive processes.

💡 Lisp

Lisp is a programming language known for its symbolic data processing and its powerful macro system. Minsky mentions Lisp, how it can manipulate representations of ideas, and its importance in artificial intelligence research.

💡 Symbolic reasoning

Symbolic reasoning is the process of using symbols and rules to carry out logical inference, a core concept in artificial intelligence. Minsky discusses its role in AI programs and how it can be used to model human thinking.

💡 Socialization

Socialization is the process by which individuals learn social norms and patterns of behavior through interaction with society. Minsky mentions its influence on behavior and how it might be modeled in AI, especially the conflict between built-in instincts and social constraints.

💡 Knowledge representation

Knowledge representation is how knowledge, information, and data are expressed in artificial intelligence; it concerns turning human knowledge into forms a computer can process. Minsky discusses different approaches, including frames and semantic networks, and how they help machines model human cognition.

💡 Goal orientation

Goal orientation refers to behavior or processes directed at achieving a particular outcome. Minsky discusses the role of goals in human behavior and how goals can be represented and handled in AI systems, including achieving them by reducing the differences between the current state and the desired state.

💡 Neuroscience

Neuroscience studies the structure, function, development, genetics, biochemistry, physiology, pharmacology, and pathology of the nervous system. Minsky talks about its relationship to artificial intelligence and how human cognition and memory can be understood from a neuroscientific perspective.

💡 Psychology

Psychology is the science of human behavior and mental processes, including emotion, cognition, and motivation. Minsky repeatedly refers to psychological theories, especially Freud's, and how they help us understand mental activity and its possible counterparts in AI.

Highlights

MIT OpenCourseWare offers high-quality educational resources for free, supported by donations.

Marvin Minsky discusses future research directions for the field of artificial intelligence.

An audience member raises a meta-question about what current AI researchers should be working on.

Minsky presents a theory of layered thinking, comparable to Freud's id, ego, and super-ego.

The concept of a goal in psychology is discussed as reducing the differences between a current state and a desired state.

Minsky emphasizes the importance of the Lisp language in AI for its ability to manipulate symbols and concepts.

The challenge of building learning machines is explored, including how to avoid getting stuck in local optima.

Minsky describes how the brain handles short-term memory and how it is transferred into long-term memory.

Theories and conjectures about how memories are stored in the brain, including the K-line theory, are discussed.

Minsky offers his view of the difference between human and animal intelligence, especially in the ability to represent differences.

Questions about the stream of consciousness and the allocation of attention are explored.

Gaps in current neuroscience and cognitive science theory, and how they might be filled, are discussed.

Minsky shares his views on teaching and learning complex concepts in multiple ways.

The audience and Minsky discuss what makes mathematics hard to learn and how different representations can help in understanding it.

Using multiple representations in the education system to strengthen learning is discussed.

Minsky stresses the importance of having multiple solution paths when solving problems.

The role of analogy in mathematics and other fields, and how analogies can simplify complex concepts, is explored.

The role of practice and repetition in learning and memory is discussed, including how varied practice deepens understanding.

Transcripts

play00:00

The following content is provided under a Creative

play00:02

Commons license.

play00:03

Your support will help MIT OpenCourseWare

play00:06

continue to offer high quality educational resources for free.

play00:10

To make a donation or to view additional materials

play00:12

from hundreds of MIT courses, visit MIT OpenCourseWare

play00:16

at ocw.mit.edu.

play00:22

MARVIN MINSKY: What do you think AI people should work on next?

play00:35

AUDIENCE: There were lots of questions

play00:36

I was going to ask you before you said your question.

play00:43

I was going to ask you, what kind of questions

play00:46

do you think that AI people should be asking right now?

play00:51

MARVIN MINSKY: That's right.

play00:52

Anybody have a meta question?

play01:05

One good question is--

play01:19

oops.

play01:24

I could focus it better or make it bigger.

play01:42

Is that enough layers?

play01:48

It's possible that I got the idea from Sigmund Freud.

play02:02

Who knows what Sigmund Freud's three layers were?

play02:07

Sure.

play02:08

AUDIENCE: Id, ego, super-ego.

play02:10

MARVIN MINSKY: I can't here.

play02:12

AUDIENCE: Id, ego, super-ego.

play02:26

MARVIN MINSKY: Freud wrote about 30 books.

play02:29

I know because I had a graduate student once

play02:33

who decided to quit computer science

play02:37

and go into emergency medicine.

play02:41

He had an MD, which he wasn't using,

play02:44

and he suddenly got fed up with computer science.

play02:51

So he sold me his set of Freud books, which is--

play03:14

But in Freud's vision, these don't seem to be a stack.

play03:22

But there is this thing, which is basic instincts.

play03:35

I shouldn't say basic--

play03:37

maybe, to a large extent, built in.

play03:50

And this is learned socially.

play04:11

And I think the nice thing about Freud's concept,

play04:15

which as far as I know, doesn't appear much

play04:18

in earlier psychology, is that these conflict.

play04:24

And when a child grows up, a lot of what we call civilization

play04:33

or socialization or whatever comes from taking the built-in

play04:40

instincts--

play04:42

which is, if you see something you like, take it--

play04:45

and the social constraints that say, you should negotiate.

play04:55

And if you want something someone else has,

play04:59

you should fool them into wanting to give it to you,

play05:03

or whatever.

play05:10

So in fact, when I make this big stack of mechanisms,

play05:18

that really--

play05:23

well, that's actually not the organization

play05:30

that the book starts to develop chapter

play05:36

by chapter, which was, instincts,

play05:47

learned reactions, and so forth, up to self-conscious.

play06:02

What's the next to top layer?

play06:07

I've forgotten.

play06:08

AUDIENCE: Ideals?

play06:09

MARVIN MINSKY: Yeah, I guess so.

play06:12

It's hard to tell.

play06:15

The reason I had six layers is that, unlike what people

play06:25

do when making theories in science,

play06:28

is, I assume that whatever ideas I have, they're not enough.

play06:35

That is, instead of trying to reduce the mind

play06:38

to as few mechanisms as possible,

play06:44

I think you want to leave room.

play06:47

If your theory is to live for a few years,

play06:50

you want to leave room for new ideas.

play06:53

So the last three layers, beginning

play06:56

with reflective thought and self-reflective and the ideals,

play07:03

it's hard to imagine any clear boundaries.

play07:07

And at some point, when you make your own theory,

play07:12

maybe you can squeeze it into these extra boxes that I have.

play07:19

A little later in the book, another hierarchy appears.

play07:36

This is imagining, as a functional diagram,

play07:43

to show how knowledge or skills or abilities

play07:48

or whatever you want to call them,

play07:52

the things that people do, might be arranged.

play07:57

And as far as I know, because virtually every psychologist

play08:04

in the last 100 years has suffered from physics envy--

play08:10

they want to make something like Newton's laws--

play08:17

the result has been that-- except for the work of people

play08:20

like Newell and Simon and other pioneers who started research

play08:27

in artificial intelligence, you've

play08:33

heard me complain about neuroscientists,

play08:35

but there's pretty much the same complaint

play08:38

to be leveled against most cognitive scientists, who,

play08:42

for example, try to say, maybe the entire human mind is

play08:48

a collection of rules, if-then rules.

play08:52

Well, in some sense, you could make anything out

play08:54

of if-then rules.

play08:56

But the chances are, if you tried

play09:01

to make a learning machine that just tried to add rules

play09:07

in some beautifully general fashion,

play09:11

I suspect the chances are it would learn for a while

play09:14

and then get stuck.

play09:16

And indeed, that's what happened maybe five or six times

play09:20

in the history of artificial intelligence.
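
To make the idea concrete, here is a minimal sketch of the "mind as a collection of if-then rules" picture being discussed: a working memory of facts and a set of condition-action rules that fire whenever their conditions hold. The rules and facts below are invented for illustration and are not from the lecture.

```python
# A minimal production-system sketch (not any particular historical system):
# the "collection of if-then rules" idea mentioned above.

def run_rules(facts, rules, max_steps=10):
    """Forward-chain: fire any rule whose conditions are all present,
    adding its conclusion to the set of known facts."""
    facts = set(facts)
    for _ in range(max_steps):
        fired = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                fired = True
        if not fired:            # no rule applies: the system is stuck
            break
    return facts

rules = [
    (["hungry", "see apple"], "reach for apple"),
    (["reach for apple"], "eat apple"),
    (["eat apple"], "hunger satisfied"),
]
print(run_rules(["hungry", "see apple"], rules))
```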

play09:24

Doug Lenat's system, called AM, Automated Mathematician,

play09:30

was a wonderful PhD thesis.

play09:32

And it learned arithmetic completely by itself.

play09:39

He just set up the thing biased so

play09:44

that it would have some concept of numbers,

play09:48

like 6 is the successor of 5 and stuff like that.

play09:53

And it fooled around and discovered

play09:58

various regularities.

play10:00

I think I mentioned it the other day.

play10:04

It first developed the concept of number,

play10:06

which wasn't very hard, because he

play10:10

wrote the whole thing in the AI language called Lisp.

play10:16

How many of you know Lisp?

play10:19

That's a lot.

play10:22

In looking at the biographies of the late John McCarthy

play10:27

all week, there were lots of attempts by the writers

play10:34

to say what Lisp was.

play10:37

[LAUGHTER]

play10:39

The tragedy is, you probably could have described it exactly

play10:43

in a paragraph.

play10:46

Because it's saying that each structure has

play10:53

two parts, and each of those has two parts.

play10:57

I don't remember seeing any attempt

play11:00

to say something about this programming language in--

play11:05

yes.

play11:06

AUDIENCE: How do you imagine the layers interact

play11:11

with each other?

play11:13

MARVIN MINSKY: Those?

play11:15

AUDIENCE: Yeah, those and the six layers.

play11:19

MARVIN MINSKY: I think they're layers of organization.

play11:36

Yes, because a trance frame is made of two frames.

play11:42

But then, if there were a neuroscientist who

play11:48

said, oh, maybe when you see an apple and you're hungry,

play11:54

you reach out and eat it, so you could think

play12:00

of that as a simple reflex--

play12:03

if this, then that.

play12:05

Or you could say, if I'm hungry now

play12:11

and I want to be not hungry, what's a possible action--

play12:15

could I do?

play12:15

And that action might be, look for an apple.

play12:21

Yes.

play12:21

AUDIENCE: So I was teaching my class on the day

play12:23

that McCarthy passed away, and then I was explaining.

play12:26

And then I had some students in the class who weren't even

play12:29

computer scientists, so I was thinking

play12:31

about the same problem, which is,

play12:32

how to explain what Lisp was?

play12:34

And so I said, well, before that,

play12:37

there were languages like Fortran

play12:38

that manipulated numbers.

play12:40

Lisp was the first language to manipulate ideas.

play12:44

MARVIN MINSKY: Mm-hmm.

play12:46

Yes, you can manipulate representations of--

play12:52

yes, it's manipulating the ideas that are represented

play12:56

by these expressions.

play12:59

And one thing that interests me is,

play13:04

another analog between psychology

play13:08

and modern cognitive psychology and computer

play13:13

science or artificial intelligence,

play13:16

is the idea of a goal.

play13:24

What does it mean to have a goal?

play13:26

And you could say, it's a piece of machinery which says that,

play13:30

if there's a situation--

play13:35

what you have now--

play13:37

and if you have a representation of some future thing--

play13:44

what you want.

play13:49

So of course, at anytime, you're in a situation

play13:53

that your brain is somehow representing maybe five or ten

play13:58

ways, not just one.

play13:59

But what does it mean to have a goal?

play14:04

And what it means is to have two representations.

play14:08

One is a representation you've made of some structure which

play14:12

says what things are like now.

play14:15

And the other is some representation

play14:17

of what you want.

play14:25

I don't think I need that.

play14:28

And the important thing is, what are the differences?

play14:36

And instead of saying, what's the difference,

play14:39

maybe it's good to say, what are the differences?

play14:46

So what does it mean to have a goal.

play14:48

It means to have some piece of machinery turned on.

play14:54

You can imagine a goal that you don't have, right?

play14:57

Like I can say, what's David's goal?

play15:01

It's to get people to go to that meeting.

play15:10

So what you want is to minimize the differences between what

play15:15

you have and what you want.

play15:17

And so there has to be a machinery which does what?

play15:24

It picks out one of these differences

play15:26

and tries to get rid of it.

play15:28

And how can you get rid of it?

play15:31

There's two ways.

play15:35

The good way is to change the situation

play15:38

so that difference disappears.

play15:41

The other way is to say, oh, that would take a year.

play15:45

I should give up that goal.

play15:51

I'm digressing.

play15:53

So you want something that removes--

play15:58

the feedback has to go this way.

play16:00

Let's change the world.

play16:09

And get rid of that difference.

play16:14

Well, how do you get rid of a difference?

play16:16

That depends on-- maybe you have a built-in reflex.

play16:25

Like if you have too much CO2 in your blood,

play16:28

something senses it and tells you to breathe.

play16:32

So you've got built-in things.

play16:40

And the question is, how do you represent

play16:43

these sorts of things?

play16:45

And in fact, I think I got a lot of this idea

play16:47

from the PhD thesis of Patrick Winston, who

play16:51

was here a minute ago.

play16:54

[LAUGHTER]

play17:02

But the question is, how do you represent what you want

play17:07

and how do you represent what you have?

play17:11

And I think the big difference between people and other

play17:16

primates and reptiles and amphibians-- reptiles, fish,

play17:21

and going back to plants and so forth--

play17:25

is that we have these very high-level powerful ways

play17:33

of representing differences between things.

play17:37

And this enables us to develop reflexes for getting rid

play17:44

of the differences.
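
The goal machinery described here, two representations plus something that picks a difference and tries to remove it, is essentially means-ends difference reduction. A minimal sketch, with invented states and operators, might look like this:

```python
# A minimal difference-reduction sketch of the goal idea above: a goal is a
# pair (what you have, what you want); the machinery picks one difference
# and applies an operator that is known to remove it.
# The states and operators below are invented for illustration.

def differences(now, want):
    return {k: v for k, v in want.items() if now.get(k) != v}

def reduce_differences(now, want, operators, max_steps=10):
    now = dict(now)
    for _ in range(max_steps):
        diffs = differences(now, want)
        if not diffs:
            return now                      # goal satisfied
        feature = next(iter(diffs))         # pick out one of the differences
        op = operators.get(feature)
        if op is None:
            break                           # can't remove it: give up the goal
        now.update(op(now))                 # change the situation
    return now

operators = {
    "hungry": lambda s: {"hungry": False} if s.get("has_apple") else {"has_apple": True},
}
print(reduce_differences({"hungry": True}, {"hungry": False}, operators))
```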

play17:46

So this is what I think might be a picture of how

play17:50

the brain is organized.

play17:51

And at every level, these things are made of neurons.

play17:55

But you shouldn't be looking at the neurons

play17:58

individually to see how the brain works.

play18:03

It's like looking at a computer and saying,

play18:07

oh, I understand that.

play18:08

All I have to do is know a great deal

play18:10

about how each transistor works.

play18:13

The great thing about a computer is

play18:16

that it doesn't matter how the transistor works.

play18:19

The important thing is to recognize, oh, look,

play18:23

they usually come in pairs--

play18:25

or really four or 10 or whatever-- called a flip-flop.

play18:36

Do you ever see a neuroscientist saying,

play18:38

where are the flip-flops in this or that?

play18:41

It's very strange.

play18:43

It's as though they're trying to develop

play18:46

a very powerful computer without using any concepts

play18:50

from computer science.

play18:52

It's a marvelous phenomenon.

play18:54

And you have to wonder, where did they grow up

play18:59

and how did they stay isolated?

play19:02

AUDIENCE: In the biology department, I think.

play19:06

MARVIN MINSKY: In biology departments.

play19:07

AUDIENCE: Or psychology departments.

play19:10

MARVIN MINSKY: Well, I started out in biology, pretty much.

play19:12

And then I ran across these early papers, one of which

play19:20

was by Lettvin and McCulloch and Pitts and people like that.

play19:28

In the early 1940s, the idea of symbolic neurons appeared.

play19:36

It had first appeared in 1895 or so with the paper

play19:42

that Sigmund Freud wrote but couldn't get published.

play19:46

I think I mentioned that, called Project for a Scientific

play19:50

Psychology.

play19:52

And it had the idea of neurons with various levels

play19:56

of activation.

play19:58

And sometimes you would have a pair of them.

play20:01

And one would be inhibiting the other,

play20:04

and so that could store some information.

play20:07

And he's not very explicit about how these things might work.

play20:10

But as far as I know, it's about the first attempt

play20:15

to have a biological theory of information processing at all.

play20:20

And he was unable to get it published.

play20:26

AUDIENCE: Marvin?

play20:27

MARVIN MINSKY: Yeah.

play20:28

AUDIENCE: Since McCarthy did this, just,

play20:31

do you some reflections on the stuff that he did

play20:34

or his contributions?

play20:35

MARVIN MINSKY: There are a lot of things that he did.

play20:43

I noticed that none of the obituaries

play20:47

actually had any background.

play20:50

What had happened is that what Newell and Simon,

play21:00

they had struggled to make programs

play21:02

that could do symbolic logic.

play21:05

And they made a language called IPL.

play21:19

And IPL was a language of very microscopic operations.

play21:24

Like you have several registers, put a symbol in a register,

play21:33

perhaps do a piece of arithmetic on two symbols

play21:36

if they're numbers.

play21:38

If not, link up--

play21:44

you can sort of make registers artificially

play21:49

in the memory of the computer.

play21:50

And you could take two or three registers.

play21:56

It had instructions for making lists or trees.

play22:00

So you could arrange these in a way that--

play22:10

so here are the three things.

play22:12

And this is just a simple list, but I've

play22:14

drawn it as though these two were subsidiary to that.

play22:20

Of course, that depends on what program

play22:22

is going to look at this.

play22:23

And you could have a program which can say, here are

play22:27

two arguments for a function.

play22:29

And it doesn't matter what order they're in.

play22:31

It just depends how you wrote the function.

play22:33

So Newell and Simon had written a programming language

play22:37

which said, put something in a register,

play22:40

link it to another register, and then perform

play22:46

the usual arithmetic operations.

play22:49

In fact, what they were doing is mostly performing

play22:51

logical operations, because even the early computers

play22:55

had ANDs and ORs and XORs, and things like that.

play23:12

So Newell and Simon had written a program that

play23:17

could deal with Boolean functions

play23:20

and prove little theorems about, not

play23:24

A or B. Is this not A or not B?

play23:29

Is that wrong?

play23:33

I forget.

play23:34

It should have more nots.

play23:36

Anyway, they wrote this beautiful but clumsy thing

play23:41

made of very simple logical primitives.

play23:44

McCarthy, at the same time, IBM had spent 400 man years

play23:51

writing a program called Fortran 1, or Fortran.

play23:56

I'm not sure that people had serial numbers on programming

play23:59

languages yet, because we're talking about the middle 1950s.

play24:06

And McCarthy had been thinking about, how

play24:09

would you make AI programs that could do symbolic reasoning?

play24:14

And he was indeed particularly interested in logic.

play24:20

Backtrack-- in fact, Newell and Simon

play24:22

had got their program to find proofs of most of the theorems

play24:29

in the first volume of Russell and Whitehead's Principia

play24:33

Mathematica, which is a huge two-volume book from about

play24:39

1905, I think, which was the first successful attempt

play24:47

to reduce mathematics to logic.

play24:50

And they managed to get up to calculus

play24:53

and show that differentiation and integration can

play24:56

be expressed as logical functions and variables.

play25:00

It's a great tour de force, because logic

play25:07

itself barely existed.

play25:09

Boole and a few others, including Leibniz,

play25:13

had invented Boolean algebra-like things

play25:17

and around--

play25:21

Frege and others had got predicate calculus with their

play25:26

[INAUDIBLE].

play25:28

And that stuff was just appearing.

play25:30

And Russell and Whitehead wrote this huge book,

play25:34

which got all the way up to describing

play25:37

continuous and differentiable functions and so forth.

play25:40

The first volume was huge and just did

play25:43

propositional calculus, which Aristotle had done some of,

play25:48

also.

play25:49

And anyway, McCarthy looked at that

play25:53

and said, why can't there be a functional language

play25:58

like Fortran that can do symbolic reasoning.

play26:02

And pretty much all by himself, he

play26:06

got the basic ideas from Lisp.

play26:09

They're built on the Newell and Simon experiment.

play26:16

But he basically converted the symbolic system into something

play26:23

like Fortran, which was only able to manipulate numbers

play26:27

into Lisp, which was able to manipulate arbitrary

play26:31

symbols in various ways.

play26:33

And if you want to know more about that,

play26:43

you can find McCarthy's home page at Stanford.

play26:49

I think if you Google search for McCarthy and SAIL--

play27:03

SAIL is Stanford Artificial Intelligence Laboratory--

play27:07

and you'll get his home page.

play27:10

And there's a 1994 article about how he invented Lisp.

play27:16

Yes.

play27:18

AUDIENCE: Right after you started talking about Lisp,

play27:20

you jumped over and you said, here's the now and the want,

play27:24

which is the current state of desire.

play27:27

And I think you were going for some kind of analogy

play27:31

between the symbolic evaluation [INAUDIBLE]

play27:35

and what conscious entities do.

play27:37

However, one of the most beautiful things about Lisp

play27:41

is the homoiconicity, the fact that you

play27:43

have things such as macros.

play27:45

What is there in the conscious entity that

play27:48

is equivalent to the macro?

play27:49

MARVIN MINSKY: Macros?

play27:50

AUDIENCE: Yeah.

play27:52

MARVIN MINSKY: Well, each of these levels

play28:02

are ways of combining things at the previous level.

play28:08

The way I drew it, at the top, there are stories.

play28:13

What's a story?

play28:14

You mention some characters in a typical story.

play28:21

Then you say, and here's a problem

play28:23

these characters encountered.

play28:25

And here's what Joe did to solve the problem,

play28:30

but it didn't work and here are the bugs.

play28:37

The reason that didn't work is that Mary was in the way,

play28:41

so he killed her.

play28:42

And then blah, blah, blah.

play28:45

So that's what stories are.

play28:49

If you just write a series of sentences,

play28:51

it's not a story, even though The New Yorker

play28:56

managed to make those into stories

play28:58

for about a period of 20 years.

play29:01

But a typical story introduces a scene

play29:04

and it introduces a problem, and then it produces a solution.

play29:08

But the solution has a bug.

play29:10

And then the rest of it is how you get around that bug

play29:14

and how you maybe change the goals in the worst case.

play29:20

So a story is a series of incidents.

play29:25

I wonder if I brought my death ray.

play29:34

It looks like I didn't.

play29:43

Oh, but you don't need it with this.

play29:53

You have to be here.

play30:01

Anyway, but what's a story made of?

play30:05

Well, there's situations.

play30:07

And you do something and you got a new situation.

play30:11

So what's that?

play30:13

That's what this thing I called a trans-frame, which

play30:18

is a pair of situations.

play30:21

OK, so what's an individual situation?

play30:24

Well, it's a big network of nodes and relationships.

play30:32

And I'm not sure why I have semantic networks down here

play30:39

rather than here.

play30:44

Oh, well, a frame is a collection

play30:49

of representations that's sort of stereotyped or canned,

play30:57

that might have a single word like--

play31:03

oh, just about any word, breaking something.

play31:10

If I say, John broke the X, you immediately

play31:15

say, oh, that's a trans-frame.

play31:19

Here is an object that had certain properties, parts,

play31:23

and relationships.

play31:25

And it's been replaced by this thing which

play31:28

has most of the same objects, but different relationships,

play31:33

or one of the objects is missing and it's

play31:36

been replaced by another frame, and so forth.
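
A trans-frame, then, is a pair of situations, a before and an after, and what it highlights is which parts and relationships changed. A minimal sketch with invented slot names:

```python
# A minimal sketch of a trans-frame: a pair of situations (before, after),
# each situation being a little network of objects and relationships.
# Slot names and the example are invented for illustration.

before = {"object": "cup", "state": "whole", "location": "table"}
after  = {"object": "cup", "state": "broken", "location": "floor"}

trans_frame = {"action": "broke", "before": before, "after": after}

# The differences between the two situations are what the frame highlights:
changed = {k for k in before if before[k] != after.get(k)}
print(changed)   # the 'state' and 'location' slots changed
```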

play31:42

So one question is, what's the relation?

play31:45

So this is a picture of cognitive representations.

play31:50

Everything is made of little parts.

play31:53

And in the society of mind, I described

play31:59

the idea of K-lines.

play32:02

What's a K-line?

play32:04

It's an imaginary structure in the nervous system.

play32:10

There's really two kinds.

play32:14

There's a sort of perceptual K-line, which

play32:17

is something that recognizes that, say, 10 features

play32:22

of a situation are present.

play32:24

And if these are all present, then a certain bunch

play32:30

of neurons or neural activity goes on.

play32:33

And on the other hand, when you think of something,

play32:38

like a word, suppose I say microphone,

play32:44

then you're likely to think of something that has a business

play32:48

end which collects sounds.

play32:50

And it has a stand or maybe it has a handle.

play32:54

And if you're an engineer, you know

play32:57

that probably it has a battery or a transmitter or a wire.

play33:03

So those are the things that I call frames or K-lines,

play33:08

really.

play33:09

And anyway, chapter 8 of The Emotion Machine

play33:21

talks about that.

play33:22

And there's a lot of detail in the old book, The Society

play33:27

of Mind, about what K-lines and things like

play33:30

that could be made of.

play33:38

Now, whatever those are, and as far as I know,

play33:50

no matter how hard you look, you won't

play33:53

find any published theories of, what in the nervous system

play33:59

is used to represent the things above that sort of midline

play34:05

there of cortical columns.

play34:12

I suspect that almost everything that the human brain does,

play34:17

that fish and those lower animals or earlier animals,

play34:25

I should say--

play34:25

I'm not sure that higher and lower makes any sense--

play34:31

are probably symbolic processes, where it probably

play34:38

does more harm than good to have elaborate theories of,

play34:42

what's the physiological chemistry of neurons.

play34:46

But at some point, we want to know what, in fact,

play34:53

the brain is made of.

play34:54

The cortical column has a few hundred brain cells arranged.

play35:00

And there are several projects around the world trying

play35:07

to take electron microscopes and pieces of mouse brain or cat

play35:11

brain and make a huge connection matrix.

play35:16

The problem is that the electron microscope, even the electron

play35:22

microscope pictures still aren't good enough

play35:25

to show you at each synapse what is probably going on.

play35:31

Eventually, people will get theories of that

play35:33

and get slightly better instruments.

play35:35

And I'm not sure that the present diagrams

play35:38

that they're producing are going to be much use to anyone.

play35:42

Yes-- some nice questions.

play35:44

AUDIENCE: This question might be a little bit difficult.

play35:49

So I'm going to start from the goal being

play35:54

a difference between the now state and the desired state.

play35:59

MARVIN MINSKY: How do we reduce the difference?

play36:01

Yeah.

play36:01

AUDIENCE: Yeah, how do we get the difference?

play36:06

And now, I'm also going to take

play36:09

the fact that the animals are different.

play36:13

And I'd say that one of the biggest big differences

play36:17

is that we, as people, we can describe these differences.

play36:26

MARVIN MINSKY: It actually is only good for the recording.

play36:28

AUDIENCE: OK, so we can describe these differences

play36:33

and talk about them with other people

play36:36

without having to act on those goals or hypothetical goals.

play36:42

So if everything that has to do with the goals

play36:46

is represented with a complex structure like this one,

play36:51

and I want to implement it in a computer,

play36:57

well, for every hypothetical case that

play37:00

has to do with the solving of a certain problem,

play37:03

I can just make more copies in the memory.

play37:06

But if I have a human brain, then I

play37:11

don't know how convenient it is to postulate

play37:16

that there is like a huge memory databank where you can make

play37:19

copies of everything that's going on,

play37:23

or if you have to assume some kind of a huge collection

play37:29

of pointers and acting on those pointers.

play37:32

So this is just a random point, and I'd

play37:36

like to hear if you have any ideas on this.

play37:40

MARVIN MINSKY: That's an important question.

play37:56

We know quite a bit about the functions

play38:02

of some parts of the brain.

play38:06

I don't think I've ever tried to draw a brain.

play38:08

But there's a structure called the amygdala.

play38:19

How do you spell it?

play38:25

I believe that means almond-shaped.

play38:28

Is that right?

play38:29

Anybody have a google handy?

play38:33

And that's down here somewhere.

play38:38

And it has the property that it contains

play38:42

the short-term memories.

play38:44

So anything that's happened in the last couple of seconds

play38:47

is somehow represented here in a transient way.

play38:55

And everything that's happened in the last 20 minutes or so

play39:01

leaves traces in the amygdala, so

play39:05

that if somebody is in an automobile accident

play39:07

or is knocked out by a real powerful boxing

play39:14

punch, then when you wake up later,

play39:19

you can't remember anything that happened in about 20 minutes

play39:25

or a half hour before the trauma.

play39:29

So that's a very good experiment to try.

play39:33

And there's a lot of evidence that this

play39:37

happens because the memories of the last half hour or so

play39:43

are stored in--

play39:44

the conjecture is in a dynamic form.

play39:47

Maybe there are huge numbers of loops of neurons connected

play39:52

in circles, or circle.

play39:59

Anyway, nobody really knows.

play40:00

But if you have a bunch of neurons connected

play40:05

in a big circle, maybe 20 or 30 of them,

play40:09

then you can probably put 10 or 20 bits of information

play40:15

in form of different spaced pulses.

play40:18

And a great mystery would be, how does the brain manage

play40:24

to maintain that particular pattern of pulses

play40:27

for 10 or 20 minutes?

play40:30

Then no one knows where it goes after the 20 minutes.
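
The loop-of-neurons conjecture can be caricatured as a circulating shift register: a ring of units holding a pattern of spaced pulses that persists only as long as the loop keeps cycling. This is purely an illustration of the idea, not a physiological model:

```python
# A toy sketch of the "loop of neurons" idea: a ring of ~20 units holding a
# bit pattern as circulating pulses. Each time step every pulse moves one
# position around the ring, so the pattern is preserved as long as the loop
# keeps cycling.

def step(ring):
    return [ring[-1]] + ring[:-1]   # rotate the pulses one unit forward

ring = [1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
pattern = list(ring)
for _ in range(len(ring)):          # after one full trip the pattern recurs
    ring = step(ring)
assert ring == pattern
```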

play40:36

But if a person gets enough sleep that night,

play40:42

then it turns out that it's no longer in the amygdala

play40:48

and it's somewhere else in the brain.

play40:50

And so one question is, if there's

play40:55

all this stuff stored here--

play40:58

and you might think of it as what's

play41:02

happened in the primary memory.

play41:04

Every computer starts out with--

play41:10

the first computers had just two or three registers.

play41:14

Then they got 16 and 32.

play41:16

And I imagine-- how many fast registers are there

play41:20

in a modern computer?

play41:22

I haven't been paying attention, maybe 64, whatever.

play41:29

Anyway, but the next day, if you've gotten some sleep,

play41:34

you can retrieve them.

play41:36

Somehow they're copied somewhere else.

play41:39

As far as I know, there is no theory

play41:42

of how the brain decides where to put them

play41:45

and how it records it and so forth.

play41:48

There's something very strange about this big science lacking

play41:52

theories, isn't there?

play41:56

What would you do if you were in a profession

play41:58

where they're talking about dozens and dozens of mechanisms

play42:04

which clearly exist, and you say,

play42:06

how do you think that works, and they say, blah, I don't know.

play42:10

None of my friends know either, so I guess it's OK.

play42:14

[LAUGHTER]

play42:18

So we don't know how it picks a place to put them.

play42:21

And we don't know how, when you ask a question,

play42:24

it gets back there so that you can

play42:26

reprocess it and talk about it.

play42:29

But anyway, so I made up these theories

play42:33

that things are stored in the form of K-lines.

play42:36

And there's a lot of discussion in The Society of Mind

play42:43

about how K-lines probably have to work and so forth.

play42:48

And if you look at The Society of Mind in Amazon,

play42:55

you'll find this enraging review by a neurologist named

play42:59

Robert [INAUDIBLE], who says he introduces

play43:02

these undefined terms called K-line and paranomes and things

play43:06

like that, of which there's whole chapters in the book.

play43:12

AUDIENCE: I've got a question before we go on.

play43:14

MARVIN MINSKY: That's my favorite review

play43:17

of explaining the problem of getting

play43:23

neuroscience to grow up.

play43:25

Yes, who had a question?

play43:28

Yes.

play43:33

AUDIENCE: One thing that puzzles me is, how does the brain

play43:38

decide to do things?

play43:41

MARVIN MINSKY: How do they decide what to do, did you say?

play43:44

AUDIENCE: Well, say you want to relay a story,

play43:49

but like hundreds of things happened.

play43:55

How do you select what to tell and what not to?

play43:58

It seems like you need some sort of intentionality behind it,

play44:03

but how do we learn to do that in the first place, almost?

play44:08

Like when you describe a goal, you

play44:10

describe what's going on now and what is the thing you want.

play44:15

But then how do you decide which few things to be considered?

play44:23

MARVIN MINSKY: Oh, you're asking a wonderful question.

play44:25

Which is, after all, if I were to talk

play44:30

to you for an hour about your goals,

play44:35

you could tell me hundreds of goals that you have.

play44:39

So the question is, how do you pick the one

play44:42

that you're thinking about now.

play44:44

AUDIENCE: Yeah.

play44:47

Also, how do you represent the priority of goals?

play44:53

They're almost described as like [INAUDIBLE] machines,

play44:56

as turned on all the time.

play44:58

But how do you resolve conflicts?

play45:00

How do you decide which one to go first

play45:03

and which ones [INAUDIBLE]?

play45:04

MARVIN MINSKY: OK, how do you represent

play45:07

the relations of your goals?

play45:10

The standard theory, for which there's no evidence, is that--

play45:17

if I could use the word hierarchy.

play45:20

So if you ask a naive person, they'll

play45:24

give you a pretty good theory--

play45:28

completely wrong, but good.

play45:31

And the standard theory says, well, there's

play45:35

one big goal at the top.

play45:39

And you say, what is it?

play45:40

And some people would say, well, maybe it's to reproduce.

play45:51

Darwin could, but didn't argue that.

play45:54

Or to survive or something like that.

play46:07

OK, so if you take that one, then it's

play46:11

obvious what the next goals in the hierarchy would be.

play46:16

I'm not saying this is how things work.

play46:18

I don't think it does.

play46:20

So there's food.

play46:23

And there's air.

play46:30

Air gets very high priority.

play46:33

If you put a pillow over somebody,

play46:37

their first priority is to breathe.

play46:39

And they don't even think about eating for a long time.

play46:43

[LAUGHTER]

play46:44

So that's very nice.

play46:49

I don't know what comes after that.

play46:53

If you're out in the cold freezing, then there's temp.

play47:01

The nice thing about air and temp

play47:05

is that, if you have those goals,

play47:07

you can satisfy them in parallel.

play47:10

Because in fact, you don't need much of a hierarchy.

play47:13

The breathing thing has this servo mechanism,

play47:18

where if there's a higher level of CO2

play47:23

than normal in your blood, then, what is it, the vagus nerve?

play47:28

I don't know.

play47:29

They know a lot about which part of the brain

play47:32

gets excited and raises your breathing

play47:35

rate or your heartbeat rate.
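
The point that goals like air and temperature can be satisfied in parallel, each by its own servo mechanism, can be sketched as independent feedback loops rather than a hierarchy. The variable names and thresholds below are invented:

```python
# A sketch of survival goals as parallel servo loops rather than a hierarchy:
# each loop watches one variable and acts only when it drifts from its set
# point. Variable names and thresholds are invented for illustration.

def breathing_servo(state):
    if state["blood_co2"] > 0.05:        # too much CO2: breathe harder
        state["breathing_rate"] += 1
        state["blood_co2"] -= 0.01

def temperature_servo(state):
    if state["body_temp"] < 36.5:        # too cold: burn more calories
        state["body_temp"] += 0.1

state = {"blood_co2": 0.08, "breathing_rate": 12, "body_temp": 36.0}
for _ in range(5):                       # both loops run side by side
    breathing_servo(state)
    temperature_servo(state)
print(state)
```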

play47:41

My favorite animal is the swordfish.

play47:48

How many of you know about brown fat?

play47:52

AUDIENCE: [INAUDIBLE]

play47:55

MARVIN MINSKY: Brown fat is a particular thing

play47:58

found in vertebrates.

play48:01

And it's fat.

play48:03

It's brown, I guess.

play48:06

And it has the property that it can

play48:09

be innervated by nerve fibers.

play48:12

And they cause it to start burning calories.

play48:17

And the swordfish is normally cold-blooded.

play48:22

But its carotid arteries have a big organ

play48:25

of brown fat around them just as the blood comes into the brain.

play48:31

And if you turn on the brown fat,

play48:33

it warms the brain and the IQ of the swordfish goes way up.

play48:37

[LAUGHTER]

play48:38

And it swims faster and uses better evasive tactics.

play48:44

And there are a couple of other cold-blooded animals that are

play48:52

known to have a warm-blooded brain that they can--

play48:56

isn't that a great feature?

play48:57

I wonder, does our brain do a little bit of that?

play49:08

I got lost telling funny stories.

play49:14

So anyway, this K-line idea is very simple.

play49:19

It says that, perhaps the way human memory

play49:25

works is that, here and there you

play49:28

have big collections of nerve fibers

play49:33

that go somewhere and go to lots of cells.

play49:37

Let's say thousands of these cells.

play49:40

And each one is connected to some particular combination.

play49:45

Imagine there's 100 wires here.

play49:48

And each of these cells is connected to, say, 10

play49:53

of those wires.

play49:55

Then how many different cells could you

play49:59

have for remembering different features of what's

play50:02

on this big bus bar?

play50:05

How many ways are there picking 10 things out of 100?

play50:09

It's about 100 to the 10th power.

play50:13

So here's a simple kind of memory.

play50:19

Of course, it'd be useless unless these bits by themselves

play50:24

have some correlation with some useful concept.

play50:28

And then at any particular time on this particular bunch

play50:33

of fibers in the brain, maybe 20 of these are turned on.

play50:39

And if something very important has happened,

play50:42

you send a signal to all these cells and say,

play50:45

any of you cells who are seeing more than 10

play50:51

or 15 of these 20 fibers at this moment

play50:55

should remember that and set themselves to do something

play51:00

next time you see that pattern.

play51:04

Something like that, something has

play51:06

to decide which of these cells is going to copy at this time

play51:10

and so forth.

play51:10

But so there is a theory of how memory,

play51:14

kind of symbolic memory, might work in the brain.
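
This fibers-and-cells memory can be written out almost exactly as described: a bus of 100 fibers, many cells each wired to about 10 of them, and a "remember now" signal that makes every cell seeing enough of its own fibers active record the moment. The parameters follow the numbers in the lecture; everything else, including the threshold, is an illustrative choice. (For reference, the exact number of ways of choosing 10 fibers out of 100 is C(100, 10), about 1.7 × 10^13.)

```python
import math, random

# A toy version of the K-line memory idea: a "bus" of 100 fibers, and many
# cells each wired to 10 of them. When something important happens while
# ~20 fibers are active, every cell that sees enough of its own fibers
# active stores that moment and can fire again on similar patterns.

N_FIBERS, PER_CELL, N_CELLS, THRESHOLD = 100, 10, 5000, 6

# Exact number of distinct 10-of-100 wirings (about 1.7e13):
print(math.comb(N_FIBERS, PER_CELL))

cells = [set(random.sample(range(N_FIBERS), PER_CELL)) for _ in range(N_CELLS)]
active = set(random.sample(range(N_FIBERS), 20))     # the current "moment"

# "Remember now!": each cell seeing >= THRESHOLD of its wires active records it.
memory = [i for i, wires in enumerate(cells) if len(wires & active) >= THRESHOLD]
print(f"{len(memory)} cells recorded this pattern")
```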

play51:18

And I got this idea from a paper--

play51:21

I don't remember what their idea was-- but by David

play51:25

Waltz and Jordan Pollack.

play51:29

Pollack is a theorist at Brandeis,

play51:35

I guess, who in recent years has turned

play51:39

into some kind of artist, and makes

play51:42

all sorts of beautiful things and simple robots

play51:44

that do this and that.

play51:46

David Waltz, search his web page sometime,

play51:52

because he was here for many years as a graduate student,

play51:57

and then developed beautiful theories of vision.

play52:01

When did Dave move?

play52:03

Do you remember, Pat?

play52:05

Anyway--

play52:06

AUDIENCE: '79?

play52:07

MARVIN MINSKY: Something like that.

play52:12

But as far as I know--

play52:16

so I made up this theory, which is really

play52:19

copied from Waltz and Pollack, but simplified and neatened up.

play52:31

Then I went on and made other theories based on that.

play52:34

But without them, I would have been

play52:38

stuck in some conditioned reflex theory for a long time,

play52:41

I suspect.

play52:45

AUDIENCE: You mentioned the role that the amygdala

play52:48

plays in storing the short-term memory.

play52:51

And you mentioned that [INAUDIBLE]

play52:55

the memories that are stored in there are wiped out.

play52:59

MARVIN MINSKY: Well, that's a question.

play53:01

Presumably, as you grow up, your amygdala

play53:04

gets better at learning what to recognize.

play53:08

But I've never seen any discussion or theory of how

play53:13

much of your short-term memory is--

play53:16

how do you learn and develop and get better

play53:18

at remembering things that are worth remembering?

play53:22

Sorry, go ahead.

play53:24

AUDIENCE: My understanding is that, so whatever memories are

play53:28

stored in the amygdala during--

play53:32

whatever is stored in the amygdala

play53:34

is wiped out after you sleep.

play53:37

What sort of implications does this

play53:39

have about remembering dreams?

play53:42

Because my understanding is that,

play53:45

after you have a lucid dream, the memories of the dream

play53:53

are wiped out after a certain point later on in the day.

play53:57

So what sort of implications does it

play53:59

have on the ability of people to remember something [INAUDIBLE]?

play54:02

MARVIN MINSKY: Good question.

play54:08

There have been some theories.

play54:09

But I think I mentioned Freud's theory, which is that you're

play54:14

not remembering anything.

play54:15

When you wake up, you make up the dream.

play54:20

And I think that's surely completely wrong.

play54:26

But Freud made it up because he was mad at Jung.

play54:29

[LAUGHTER]

play54:34

Jung had a theory that people have telepathic connections

play54:37

with other people.

play54:39

And so he had been Freud's student or disciple

play54:45

for some years.

play54:46

And then he went mystical, and Freud went up the wall,

play54:52

because Jung was obviously very smart and imaginative.

play55:01

I'm trying to think if I've had a very good student who

play55:05

turned mystical--

play55:09

one or two, but not like that.

play55:13

Anyway, that's a great question.

play55:15

And when I said things hang around

play55:17

in the amygdala for 20 minutes, that's just some things.

play55:29

If it takes sleep the next night, which

play55:33

is eight hours or 16 hours later, to solidify it

play55:39

or to copy it in some way into the other parts of the brain,

play55:44

it must still be in the amygdala or somewhere.

play55:47

So in fact, maybe there's the amygdala and some other parts

play55:55

of the brain that haven't been identified

play55:58

that contain slightly different copies of the memories

play56:03

for a longer time and so forth.

play56:05

So who knows where and how they work.

play56:10

Maybe the language centers remember paragraphs of things

play56:15

that you've heard or said for some time, and so forth.

play56:20

I don't think anybody really knows much about--

play56:26

they're very sure about the amygdala,

play56:29

because injuring the amygdala or injecting Novocaine

play56:36

into a blood vessel that goes there

play56:39

has such a dramatic effect on--

play56:41

you just can't remember anything for that short period

play56:44

or half hour or so.

play56:51

Maybe memories are stored in 10 different ways in 10

play56:55

different parts of the brain.

play56:56

Who knows?

play56:58

One problem that I think I mentioned

play57:00

is that, although a great deal has

play57:03

been learned about the brain from modern scanning

play57:07

techniques, almost every result that people

play57:12

talk about is obtained by turning up the contrast

play57:18

so that most of the brain is dark and nerve centers that are

play57:23

highly active show up in your--

play57:26

you've all seen these pictures which

play57:28

show three or four places in the brain lighting up.

play57:33

Well, there's a good chance that, for any particular event,

play57:37

there might be 10 or 20 places that have just

play57:41

increased a little bit.

play57:42

And when they turn up the contrast,

play57:45

all that evidence is lost because those regions all

play57:50

become black or whatever.

play57:53

AUDIENCE: Yeah, I don't know the next part of that.

play57:56

But I would go as far as to say that, probably if they

play58:00

have a finding where there are no specific areas,

play58:05

so you have a pretty uniform picture

play58:09

of the changes in metabolism, then you don't make theories

play58:14

or you don't publish that result,

play58:17

because you don't have any clear areas for [INAUDIBLE],

play58:20

and that nobody knows that, OK, that particular thing was

play58:24

actually exciting [INAUDIBLE].

play58:28

That's just a guess.

play58:29

MARVIN MINSKY: Yeah, it could be that some things involve

play58:31

very large amounts of brain.

play58:33

But I'm inclined to doubt it.

play58:38

Probably you want to turn a lot of things off most of the time

play58:41

so they don't fill up with random garbage.

play58:44

Who knows?

play58:46

Yes.

play58:47

AUDIENCE: And to follow up with that,

play58:49

why do you think the hierarchy of goals is naive?

play58:53

And what specific features of goals

play58:56

do you think that structure doesn't achieve?

play59:00

MARVIN MINSKY: Oh, I didn't finish that, did I?

play59:12

She's asking-- I started to say, what's the hierarchy of goals?

play59:17

But it looks like I got stuck on the well-defined, instinctive

play59:21

goals that you need to stay alive.

play59:24

And I guess my answer is, I don't have any good theories

play59:29

of how you do that.

play59:31

At any time, when you're talking to somebody,

play59:36

you usually have a couple of very clear goals,

play59:39

like, I want to explain this, I want this other person

play59:47

to understand this for this reason.

play59:52

I'm having trouble.

play59:53

Maybe I have to get his or her attention by--

play59:57

and then you get a sub-goal of doing some social thing

play60:00

to convince them to listen to you and all sorts of things.

play60:05

But I just don't have a nice picture.

play60:12

When you're writing an AI program,

play60:15

you usually have goals and sub-goals

play60:18

in a very clear arrangement.

play60:22

Like the theorem-proving programs are wonderful,

play60:26

because you've proved some kind of expression,

play60:31

but the particular theorem you are trying to prove

play60:34

has another condition which is different from

play60:37

this condition.

play60:39

And people have gone quite far in making models of something

play60:51

like theorem-proving, where the world is very simple.

play60:56

If you're proving something in geometry or group theory

play60:59

or a little fragment of mathematics,

play61:03

then there are only 5 or 10 assumptions hanging around.

play61:07

And so you could actually plan a little bit of exhaustive search

play61:11

to go through your four levels.

play61:13

And then you would do something like in a chess program of,

play61:17

over time, discovering it never pays

play61:20

to explore a tree that has this feature because of whatever.

play61:27

Yes.

play61:28

AUDIENCE: Are goals relevant?

play61:30

Like, we always have goals where it's just like something

play61:34

where it's like, when I play chess, maybe I have a goal,

play61:38

but why should I have a goal?

play61:42

Why isn't that like, maybe, I don't know,

play61:46

that goal [INAUDIBLE] at some points of my life.

play61:54

Why are goals important?

play61:57

MARVIN MINSKY: Well, the survival goals

play61:58

are important because if you cross

play62:01

the street without looking, you could do that about 20 times

play62:07

before you're dead.

play62:16

AUDIENCE: So just really the survival goals are important?

play62:20

MARVIN MINSKY: Well, if you don't make a living,

play62:22

you'll starve.

play62:23

So now, if you've committed yourself

play62:27

to being a mathematician, now you

play62:29

have to be a good mathematician or else you'll starve,

play62:33

and so forth.

play62:35

I was a pretty good mathematician.

play62:37

Only my goal was, I had to be the best mathematician,

play62:41

so I quit.

play62:45

You don't want to have a goal you can't achieve.

play62:50

AUDIENCE: Yeah, but is that part of [INAUDIBLE]?

play62:54

MARVIN MINSKY: Well, a lot of people do have one.

play62:56

So it eats up a lot of their time and they're wasting it.

play63:01

I'm not sure what the question is.

play63:06

I think the feature of humans is that they're

play63:10

sort of general purpose.

play63:12

So there are a lot of things people do,

play63:17

which are bad things to do.

play63:20

You can't justify them.

play63:22

You can think of people as part of a huge search process.

play63:26

And as a species or a genetic system,

play63:31

it pays to have a few crazy ones every now and then.

play63:35

Because if the environment suddenly changes,

play63:38

maybe they'll be the only ones who survive.

play63:42

But William Calvin's question, how come people

play63:45

evolved intelligence so rapidly in five million years?

play63:51

And he attributes it partly to five--

play64:00

how many periods of global cooling were there?

play64:04

It's about six or seven ice ages in the last--

play64:09

anybody know the history of the Earth?

play64:14

Anyway, some evidence-- at least used to be,

play64:17

I haven't paid any attention--

play64:19

is that the human population's got down to maybe

play64:22

just tens of thousands several times in the last million

play64:26

years.

play64:27

And so only the really different ones managed to pull through.

play64:33

It might be the one who had all sorts of useless abilities.

play64:38

Yes.

play64:39

It was the ones who ate the others, which would

play64:44

have been punished before that.

play64:47

Go ahead.

play64:47

AUDIENCE: You're talking about representing the K-lines

play64:50

and everything like [INAUDIBLE], some lines

play64:53

and then activating some of these features.

play64:55

So in the case of learning, which

play64:57

have changed some of these connections?

play64:59

And if this is the case, how would this effect the higher

play65:04

order, like higher level, just like frames and [INAUDIBLE]??

play65:08

Like if you change something that's really

play65:09

for low-level representation, will this

play65:13

affect a lot-- like the whole system

play65:14

will break because some stop procedure wouldn't

play65:17

be able to return properly?

play65:18

MARVIN MINSKY: That's great.

play65:20

That's another question I can't begin to answer.

play65:23

Namely, when you learn something--

play65:28

let's take the extreme form--

play65:30

do you start a new representation or you

play65:33

modify an old one?

play65:35

OK, that's a choice.

play65:37

If you modify an old one, can you

play65:40

do that without losing the previous version?

play65:44

So for example, if I grow up monolingually, which I did--

play65:51

so I learned English.

play65:54

And then I can't remember why, but for some reason,

play66:00

around 4th grade, some teacher tried to teach us German.

play66:07

And so for each German word, I try

play66:12

very hard to find the equivalent English word

play66:16

and figure out how to move your mouth so that it comes

play66:20

out German, which actually, for many words,

play66:25

works fine, because English is a mixture

play66:27

of German and other things.

play66:30

And for many words, it doesn't.

play66:32

So that was a bad idea.

play66:34

[LAUGHTER]

play66:35

And if I had gotten very good at it,

play66:38

I could have lost some English.

play66:41

Anyway, that's a great set of questions.

play66:46

When do you make new memories?

play66:48

When do you modify old ones?

play66:50

And the hard one is, if you can't modify an old one,

play66:57

how do you make a copy that's different.

play67:02

And there's a section called, in Society of Mind,

play67:07

about how we do that in language by paraphrasing

play67:13

what someone said.

play67:14

Or you say something in your own head and you misunderstand it.

play67:21

I'm doing that all the time.

play67:22

I say, such and such is a this or a that.

play67:25

And then it's as though somebody else had said that.

play67:28

And I have someone going, no, that's wrong, he meant this.

play67:33

So when you're talking to yourself,

play67:37

you're actually converting some mysterious inarticulate

play67:44

representation to speech.

play67:47

And then you're running it through your brain

play67:49

and listening to it, and converting the speech back

play67:52

to a new representation.

play67:54

So I think the wonderful thing about language

play67:58

is, it's not just for expressing.

play68:04

How come the only animal that thinks

play68:07

the way we do is the only animal that talks the way we do?

play68:13

Who knows?

play68:14

Maybe whales, but nobody has decoded

play68:18

them do something like that.

play68:22

But that's a question.

play68:24

How do you copy a memory to make a new version?

play68:27

And the answer is, you can ask anyone and they'll say,

play68:32

I don't know.

play68:34

Why aren't there five theories of that?

play68:36

Yeah.

play68:38

AUDIENCE: I don't know if that's true, but I believe there is sort of an objective path between words in language-- like, for example, know, no. So for example, if I have a table, I say [SPEAKING PORTUGUESE] in Portuguese. There is sort of an objective that-- even for colors, which is sort of weird, because you're kind of dividing all colors into [INAUDIBLE] of colors. And even, I believe, for words, languages that were not formed from the same ancestry. I don't know. For me, it seems that there's some sort of objective [INAUDIBLE] between words.

AUDIENCE: Well, there are certain things that are just sort of naturally more useful to talk about, and so a lot of languages can have words for the same thing. Like languages will have a word that means table. But if there are some cultures that don't have tables, they probably wouldn't have a single word for table. And regarding the colors thing, there have actually been some interesting studies done on that. They've done color-naming surveys with lots of languages in the world. And it turns out that different languages partition the color space differently.

MARVIN MINSKY: I was just curious if anybody knows it.

AUDIENCE: They tend to do it similarly. The way that people perceive colors, the way that they divide up the space based on the number of words that they have, is actually like maximizing the similarity of things with the same name and the difference between [INAUDIBLE].

AUDIENCE: [INAUDIBLE] did some interesting studies on that.

MARVIN MINSKY: Right.
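The divide-up-the-space idea just described can be read as a clustering objective: choose color categories so that hues sharing a name sit close together. The sketch below is only a stand-in for whatever people actually do-- it treats hue as a plain one-dimensional scale and uses ordinary k-means, both simplifying assumptions.

```python
# Rough sketch: partition hues (treated as points on a 0-360 line, not a circle)
# into k named categories so that hues with the same name stay close together.
import random

def kmeans_1d(values, k, iterations=50):
    centers = random.sample(values, k)
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

hues = [random.uniform(0, 360) for _ in range(200)]
centers, groups = kmeans_1d(hues, k=6)          # six "basic color terms"
for center, group in sorted(zip(centers, groups)):
    print(f"category near hue {center:5.1f} covers {len(group)} samples")
```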

MARVIN MINSKY: I'm trying to remember when children use colors. Let's see. I had a daughter-- whom I still have, I must say-- and she suddenly started using color words, and had six or seven of them in just a couple of days. And she sort of got interested in that. And so suddenly, I said, oh, my gosh, this is what Maria Montessori calls a-- I forget what she called it. What's this moment when the child is open to being taught? So I said, this is her chance to learn a lot of new color names. So I said, and what about this? I said, that's aqua. And she said, that's blue-green.

[LAUGHTER]

And I said, no, this is aqua. And she refused. So that was that. Some months later, she learned some more color names, but it wasn't the same.

AUDIENCE: When she was saying blue-green, is that because somebody had taught her blue-green, or was it because she was combining those two colors?

MARVIN MINSKY: I think she was combining it. Do you remember? Yeah, I think she was combining it, because it looked a little blue and a little green.

AUDIENCE: So she had a concept that there were certain building blocks of colors.

MARVIN MINSKY: It looked like she wouldn't accept a new one.

[LAUGHTER]

And I always wondered if I was 15 minutes too late. When does the Montessori door shut?

AUDIENCE: Sort of similarly, there are studies done on different languages of how well people distinguish different colors. So in Russian, there are different words that mean light blue and dark blue, so Russians are better at distinguishing different shades of blue than, say, Americans. And there are also a lot of languages that don't have two distinct words for blue and green, and native speakers of those languages have trouble distinguishing blues and greens.

MARVIN MINSKY: Oh, that gives me a great idea. If I could have found some unfamiliar objects and colored them aqua, that might have fooled her.

[LAUGHTER]

Is there a branch of psychology that worries about-- of course, there must be names for sounds too. And certainly, a lot's known about the ages at which children can't get new phonemes. Yes, Henry.

AUDIENCE: I've got a story on memory and language. So I'm bilingual. Maybe other bilingual people in the audience can confirm this.

MARVIN MINSKY: I didn't know that. What's your other language?

AUDIENCE: French.

MARVIN MINSKY: For heaven's sake. I've known Henry for 20-odd years.

AUDIENCE: So one thing that can happen is, you can be talking with another bilingual person-- so I can be talking to someone who also speaks both French and English-- and then, like a week later, I'll remember every detail about a complicated conversation. We were working on this project, and we were going to meet at this time, and all this stuff. I remember every detail, except which language we had the conversation in.

AUDIENCE: Yeah, my entire family is bilingual. We generally speak a mix of Russian and English all the time, so I can never even remember what language I'm speaking at the time.

[LAUGHTER]

AUDIENCE: Yeah, and that mystifies me. Because if we store it in a language, how could I forget which language?

AUDIENCE: Well, I think there is a very simple answer to that one. You have a language that's neither English nor French. And you just have a very simple [INAUDIBLE] there. And then whenever you want to express something in English or French, you would just decode, encode, whatever the word is, [INAUDIBLE] that you were having.

AUDIENCE: Well, that's the question. What is that language? Do you have any thoughts on that?

MARVIN MINSKY: I don't know. You remind me-- I think I mentioned this-- but I was once at a meeting in Yerevan, Armenia. And there was a translator who was practically real-time. And Francis Crick was talking. And at some point, the translator switched and started translating from English to English. So Crick would say something and the translator would translate it into English-- very well, I thought. It wasn't quite the same words. And after a while, somebody asked him why he was doing that, and he said he didn't realize he had switched. Do you think that could happen to you?

AUDIENCE: I suppose.

AUDIENCE: Do you think that there are other ways of translating ideas and learning them, maybe like art or music, besides language, that can be really helpful?

MARVIN MINSKY: I don't think there's anything nearly like language. Art is pretty good, but it's so ambiguous. Cartoons, they're awfully good.

AUDIENCE: How about Lisp?

[LAUGHTER]

MARVIN MINSKY: What?

AUDIENCE: How about Lisp? Yeah, that was a joke.

MARVIN MINSKY: Oh, right, programming languages.

Yes, why is mathematics so hard? I wonder if the habit of using a single letter for every variable might make it both easy and hard. Who knows. Yes. You have great questions.

AUDIENCE: Can you talk about-- last year you mentioned that mathematics is hard. I thought about it.

MARVIN MINSKY: Say it again.

AUDIENCE: Last year, you mentioned that mathematics is hard. I thought about it, and I do feel like there's an extreme lack of representations of ideas. Solving a problem, we need to identify so many things, and there are so many processes that you apply to them without having a name for any of them, or like classification. Well, you have induction and deduction, that's about it-- and contradiction.

MARVIN MINSKY: Yes. One feature of mathematics is completely unredundant representations. I wonder if there's some way to fix that or change it. What other activity do we have where there's absolutely no redundancy at all in the mathematical expression? So for some people it's delightful, and for other people it's very hard. I mentioned Licklider, who, in programming, would have very long variable names. Sometimes they'd even be sentences, like "the register in which I put the result of." And the great thing was, you could read those programs. They looked sort of stupid, but he didn't have to-- what do you call notes? He didn't have to have, exclamation point, this means that. Comments, comments.
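The Licklider point-- that redundancy in the names can replace the comments-- is easy to illustrate. A small made-up example; the functions and names here are invented for the illustration, not taken from the lecture:

```python
# Unredundant, single-letter style: correct, but it needs a comment to be read.
def f(p, r, n):
    return p * (1 + r) ** n        # compound interest after n periods

# Licklider-style redundancy: the names themselves carry the explanation.
def balance_after_compounding(starting_principal,
                              interest_rate_per_period,
                              number_of_periods):
    return starting_principal * (1 + interest_rate_per_period) ** number_of_periods

print(f(1000, 0.05, 10))                              # 1628.89...
print(balance_after_compounding(1000, 0.05, 10))      # same value, readable call
```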

AUDIENCE: When I sit in like an [INAUDIBLE] class, I just can't make myself accept the concepts unless I can understand them algebraically and [INAUDIBLE] geometric equation.

MARVIN MINSKY: What kind of math do you like? Do you do topology?

AUDIENCE: I like topology. I like [INAUDIBLE].

MARVIN MINSKY: I love topology. I was once tutoring a high school student who couldn't do algebra. I don't know if I mentioned this. And it turned out he didn't know how to use parentheses. So he would have an expression, stuff like that. But if there were something in there, he didn't know what that meant. He didn't know how to match them. So I couldn't figure out why. And I asked, how come? And he said, maybe I was sick the day they explained parentheses. And so I gave him little exercises like, make them into eggs. And you see, if you make this into an egg, then this egg won't work.

[LAUGHTER]

And the funny thing was, he got that in five minutes, and then he didn't have any trouble with algebra the rest-- can you imagine? I've never done much tutoring, but if you can find the bug like that, it must be great fun. But I bet it doesn't happen very often. That was so funny. I couldn't imagine not-- you know?
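The "eggs" exercise is teaching the matching rule that a few lines of code make explicit: each closing parenthesis has to pair with the most recent unclosed one, and nothing can be left open. A minimal sketch of that rule:

```python
def parentheses_match(expression: str) -> bool:
    """Each ')' must close the most recently opened '(' -- the 'egg' rule."""
    open_count = 0
    for ch in expression:
        if ch == "(":
            open_count += 1
        elif ch == ")":
            open_count -= 1
            if open_count < 0:       # a ')' with no '(' left to pair with
                return False
    return open_count == 0           # every '(' must have been closed

print(parentheses_match("(a + b) * (c - (d / e))"))   # True
print(parentheses_match("(a + b)) * (c"))             # False
```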

MARVIN MINSKY: So if you don't have language, there must be-- well, why are some people so much better at math than others? Is it just that they've not understood about five things?

AUDIENCE: I feel like they have a set of things they know to go to when they face a problem. Well, that's kind of similar to your [INAUDIBLE] story. But I feel like they know exactly. They have names for concepts, for ways of solving problems. So it's like, I'm trying this approach, and then if it doesn't work, I know this other approach. I just try it. Instead of looking at the problem and thinking, OK, so what possible thing, what possible method, can I think of using?
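That picture-- a handful of named methods tried one after another until something works-- is simple to write down. A toy sketch; the method names and the "problem" dictionary are purely illustrative:

```python
# Toy sketch: a problem solver as an ordered list of named methods.
# Each method either returns an answer or None, meaning "this approach failed".

def try_factoring(problem):
    return problem.get("factored")

def try_substitution(problem):
    return problem.get("substituted")

def try_contradiction(problem):
    return problem.get("by_contradiction")

METHODS = [("factoring", try_factoring),
           ("substitution", try_substitution),
           ("contradiction", try_contradiction)]

def solve(problem):
    for name, method in METHODS:           # "if this approach doesn't work,
        answer = method(problem)           #  I know this other approach"
        if answer is not None:
            return name, answer
    return None, None                      # ran out of named methods

print(solve({"substituted": "x = 3"}))     # ('substitution', 'x = 3')
```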

MARVIN MINSKY: Oh, names for methods. So do you have public names, or are they secret names?

AUDIENCE: I feel like they are secret names. They're just like stores-- because they can't explain it to other people. They can't be like, this problem-- like very few of them-- and like this problem and that problem, they have the same general method [INAUDIBLE].

MARVIN MINSKY: I had a friend who was a composer. And she had all sorts of sounds. And they were filed away on tapes. And the tapes had little symbols on them. And if she needed a sound, she would go to the closet and pull out the right tape. And it had more symbols written on the reel. And she'd get this thing, which might be a thunderstorm or a bird or something. So I asked her what her notation for these sounds was. And she giggled and said, I can't tell you. It's too embarrassing. I never found out.

[LAUGHTER]

But she had developed some code for sounds. Yeah.

AUDIENCE: So I would say that they have some [INAUDIBLE] representation of things that requires [INAUDIBLE] symbols or patterns of solutions. And their representation is optimal. And if you're good at math, it doesn't mean that you're good at playing music. Because when you play music well, you maybe have a good representation of the sounds. And so it's just [INAUDIBLE]. And so you cannot access all-- so I have these solutions, these patterns of solutions that I need to-- I don't know. I need to solve this math problem, by deduction or by contradiction. And in his brain, or somebody that knows a lot, the person can access it much faster; their representation itself is very well-defined. So you can access [INAUDIBLE].

MARVIN MINSKY: That's an interesting whole-- let me interrupt. I was once in some class, a math class. And it was about n-dimensional vector spaces. And some student asked, well, how do you imagine the n-dimensional vector space? It's two stories. And the instructor, who I forget, thought for a while. Then he said, oh, it's very simple. Just pick a particular n. So that was completely useless.

[LAUGHTER]

And I was a disciple of a mathematician at Harvard named Andrew Gleason, who was a wonderful man. Only a couple of years older than me, but he had won the Putnam three times, first prize. And I said, what would you tell a student who wanted to understand an n-dimensional vector space, what it means? And he said, well, you should give him five or six ways. I don't know. Like, imagine a bunch of arrows. And remember that each of them is at right angles to all the others, just like that. Then he added, or if there's an infinite number, you should have the sum of the squares of their lengths converge to a finite value. And then he said, or you should think of it as a Fourier series with things of different frequencies. And then he said, or you should think of an object in a topological space, and each dimension is finding the boundary of the last one. And he went on for about six or seven, and that was a great idea. Well, have seven completely different ways.
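Gleason's first three pictures can be written down compactly. Reading "arrows at right angles" as an orthonormal basis, the infinite case as square-summability, and the next as a Fourier expansion, the standard statements are textbook definitions, not a quotation from the lecture:

```latex
% Orthonormal "arrows": pairwise right angles, each of unit length.
\langle e_i, e_j \rangle =
  \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases},
\qquad x = \sum_i x_i\, e_i .

% Infinitely many arrows: keep the vectors whose squared lengths sum to a finite value.
\ell^2 = \Bigl\{ (x_1, x_2, \dots) \;:\; \sum_{i=1}^{\infty} x_i^2 < \infty \Bigr\}.

% The same space seen as a Fourier series: one coordinate for each frequency.
f(t) \;\sim\; \frac{a_0}{2} + \sum_{n=1}^{\infty} \bigl( a_n \cos nt + b_n \sin nt \bigr).
```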

MARVIN MINSKY: And I remember I once had the same conversation with Richard Feynman. And I said, well, how did you do that? And he said, well, when I grew up, whatever it was, I always thought of three or four representations. So if one of them didn't work, another one would. What?

AUDIENCE: So my idea for somebody, if you ask them about how to understand multiple-dimensional space, is I'd say, read Flatland.

[LAUGHTER]

Because that would give you the analogy. Once you had that analogy, then it would be easy to extend it to other dimensions.

MARVIN MINSKY: Oh, good. Has anybody written a 4D Flatland, where you make fun of the 3D people? They can't get out of a paper bag.

[LAUGHTER]

AUDIENCE: And so there will be some events that maybe will prove that. So for example, in my case, when I do a lot of math, when I try to talk to people, it's very hard. And like maybe [INAUDIBLE] my representation of solving problems in math. And people tend to get better at math if they practice it a lot, because they are optimizing their representation of math, and that would be the case.

MARVIN MINSKY: I think I understand the problem, but I don't think I have any friends left who are not mathematicians.

[LAUGHTER]

That's what happens if you live in this place long enough. Yeah.

AUDIENCE: So that's one way they're doing better. I feel like the other way they're doing better is, they objectify things that we don't objectify. It's like the learning-how-to-learn-better idea, learning how to learn better to learn better. So it's one thing to know what the right representation is for a particular problem, like what the right method is. It's another thing to optimize that process. So, what process did I use in finding that representation? And then they make that into a concept. And then they have a lot of these kinds of concepts.

MARVIN MINSKY: Have you ever helped somebody to learn better?

AUDIENCE: Yeah, [INAUDIBLE].

MARVIN MINSKY: What did you tell them?

AUDIENCE: So she had trouble with, basically, two truth tables, something like that, like AND, OR and stuff like that. So her way of seeing a problem is to make a chart. I forget what I told her. I told her to kind of map simple problems.
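The AND/OR charts in question are small enough to generate, which is one way to see why a chart is a reasonable representation of them in the first place. A quick sketch:

```python
from itertools import product

# The AND / OR "chart": one row for each combination of inputs.
print("A      B      A AND B  A OR B")
for a, b in product([False, True], repeat=2):
    print(f"{str(a):6} {str(b):6} {str(a and b):8} {str(a or b):6}")
```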

MARVIN MINSKY: You know, maybe most people don't have the word "representation" in their language. Is there any place in grade school where you actually talk about, what's your representation of acceleration? Do we teach that word as part of any subject?

AUDIENCE: [INAUDIBLE] If you're drawing some base or something like that, it'll ask you to represent it.

MARVIN MINSKY: Yes. But it's hard to get out of-- yeah, OK. So they're radically different representations of--

AUDIENCE: Maybe that's [INAUDIBLE]

MARVIN MINSKY: Yes, I don't know.

AUDIENCE: [INAUDIBLE]

MARVIN MINSKY: Tinker Toys. Tinker Toy.

AUDIENCE: What's that?

MARVIN MINSKY: Yes, to represent physical structures as Tinker Toys. Yeah, I wrote a little article complaining about the popularity of LEGO as opposed to Tinker Toy. Because the children who grow up with LEGO can't understand how to make something strong by making a triangle. So I sort of had the conjecture that although those people could build all sorts of wonderful houses and things, they ended up deficient in the most important of all architectural concepts. A triangle is infinitely strong, because you can't alter a triangle without breaking it, whereas-- I don't know what. That's a run-on sentence I can't finish.
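The triangle remark has a standard engineering reading. For pin-jointed frames in the plane, a simple counting rule (a rough version of the full rigidity story) says roughly how many bars it takes to brace a set of joints, and the triangle is the smallest shape that meets it:

```latex
% Plane framework: j joints contribute 2j coordinates; rigid motions remove 3.
\text{bars needed} \;\approx\; 2j - 3.

% Triangle: j = 3 joints and 3 bars, and 2 \cdot 3 - 3 = 3, so it cannot flex
% (equivalently, by side-side-side, its three lengths fix its shape).
% Rectangle: j = 4 joints but only 4 bars, while 2 \cdot 4 - 3 = 5, so it can
% still flex -- which is why adding a diagonal brace stiffens it.
```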

AUDIENCE: This explains the deterioration of society, Marvin. We don't have Tinker Toys, and we don't have chemistry sets with chemicals that make explosives anymore.

[LAUGHTER]

You have to go to terrorist school to get a good education.

[LAUGHTER]

AUDIENCE: So actually, in this conversation about being good at things and learning how to learn better, I think a point that sort of relates to this idea of tinker sets and playing around with things is that it's not enough to simply come up with the best representations of a concept. In order to actually be good at something, whether it's music or speaking a new language, you have to not only understand it conceptually, but you actually have to gain a certain amount of fluency. And to gain fluency, you do have to play around with the thing a lot, whether it's turning it around in your mind or practicing it physically. So in the case of math, it's like, yeah, you can come up with all these different representations of it. And that's the first step, understanding it. And it's great once you understand it. But just because you understand the concept, like on a conceptual level, doesn't mean that you actually know when to use it or how to use it when you're solving a problem. And similarly for music-- I guess I'm mostly talking about the case of improvisational music, when I'm trying to say something with the music. So I have something that I want to say. And maybe it's something sort of low level. I'm trying to resolve one chord to get to some sort of cadence. Now, I can have multiple ways of resolving the chord. And in order to do this, I have a vocabulary of the different ways. And if one way doesn't occur to me when I'm playing the piece, I can try another way. But the important thing is that I have some way that I can resolve it in real time, or else my piece is never going to come out. And then it's the same thing about learning different languages or speaking different languages. In order to be able to speak or to express ourselves, we have to have not only an understanding of the language, of the structure, but the immediacy of being able to access it. And that comes with practice, with fingering, what have you.

MARVIN MINSKY: Well, that goes in several directions. Where in our educational system do we-- in grade school, is there a place where you emphasize having several representations? Because I can't think clearly right now. But it seems to me that you're usually trying to tell them the one best way.

AUDIENCE: It's like, A, there's the idea of the one best way, and B, there's the idea of reinforcing the same process over and over again. So when you learn math, it's like, you learn this technique, and you reinforce the technique by doing a bunch of homework problems that are essentially repetitions of the same thing. Whereas I think a better way of doing it-- well, two better ways-- A, you have multiple representations. And B, you create problems where you make people traverse paths differently. And different people may have different solutions. And each time you solve the problem you may have a different solution. But the idea is, you lay out a whole network of paths in your head to solve any given type of problem.

MARVIN MINSKY: OK, so where in grade school do you ask children to solve the same problem three ways? Can anybody think of-- is that part of education?

AUDIENCE: [INAUDIBLE] fractions.

MARVIN MINSKY: What?

AUDIENCE: That's the closest thing I can think of-- fractions.

AUDIENCE: What about literature class, where they ask you for interpretations of novels?

MARVIN MINSKY: Yes. I bet there are things that happen in literature that don't happen anywhere else in the curriculum. But most children don't transfer it.

AUDIENCE: Another thing, in China, in learning math, when we try to find areas of certain geometric shapes, we always do it multiple times, multiple ways.
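Finding the same area several ways, as just described, is easy to demonstrate: one triangle, computed by base-times-height, by Heron's formula, and by the shoelace formula. Each route leans on a different representation of the figure, but they all agree.

```python
from math import sqrt, dist

# One triangle, three routes to its area.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

# 1. Half base times height (base AB lies on the x-axis, height is C's y-coordinate).
area_base_height = 0.5 * 4.0 * 3.0

# 2. Heron's formula, starting from the three side lengths.
a, b, c = dist(B, C), dist(A, C), dist(A, B)
s = (a + b + c) / 2
area_heron = sqrt(s * (s - a) * (s - b) * (s - c))

# 3. Shoelace formula, straight from the coordinates.
area_shoelace = 0.5 * abs(A[0] * (B[1] - C[1]) + B[0] * (C[1] - A[1]) + C[0] * (A[1] - B[1]))

print(area_base_height, area_heron, area_shoelace)   # all equal 6.0
```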

MARVIN MINSKY: In topology, whatever it is, you just make it into triangles and simplexes.

[LAUGHTER]

So that's a very strange subject.

AUDIENCE: Maybe [INAUDIBLE] so nice is because we can think about it as logical concepts, like [INAUDIBLE] sets, [INAUDIBLE] points, stuff like that. And then you [INAUDIBLE]

MARVIN MINSKY: Co-sets. Where in real life do you have duality? That's a nice feature of a lot of mathematics. Whatever you're doing in some fields, there's a dual way, where you look at the space of the functions on the objects rather than the objects. Where is that in-- is there anything like that in real life? Because in mathematics, a lot of problems suddenly become much easier in their dual form. It would just change everything.
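The dual-form idea-- working with the functions on the objects instead of the objects themselves-- has a standard statement for vector spaces, which is the simplest place to see how a problem can be restated without anything being lost:

```latex
% The dual of a vector space V: not the vectors, but the linear functionals on them.
V^{*} = \{\, \varphi : V \to \mathbb{R} \;\mid\; \varphi \text{ is linear} \,\},
\qquad \dim V^{*} = \dim V \quad \text{(finite-dimensional case)}.

% A statement about vectors can be restated about functionals: for instance,
% a hyperplane \{\, x : \varphi(x) = c \,\} is just "one functional and one number".
```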

AUDIENCE: There's a question, Marvin.

MARVIN MINSKY: I've been facing one way.

AUDIENCE: I guess a couple of points-- you said, why is it difficult? Whenever I've struggled, I think it's because it's constructive, and you have to code a lot in your head. You have to code the entire structure of the [INAUDIBLE] field. Because if you're learning algebraic topology, you're holding all of algebra and all of that structure in your head. And so sometimes it becomes difficult if you don't have it constructed at the right level. And what I found-- I think my advisor was just really good-- many times, he basically said two things. Really good mathematicians are really good at making analogies in mathematics. And [INAUDIBLE] geometry. And he says, really good algebraic geometers can boil everything down to linear algebra. And he said you can only do that if you abstract at the right level. And he never gave techniques for doing that. But I think the difficulty with that [INAUDIBLE].

AUDIENCE: An analogy is a relationship between two objects. And if you're good at making analogies, you're at a level beyond, a level above just looking at objects. You're looking at relationships between objects. And regarding the practicing thing, I mean, it's still related to representations, because at each practice you're learning something new about these types of problems that might make you better at identifying them in the future. And it's not like numerous practices-- the number of practices doesn't matter to your ability to solve problems. It's what you learn from each practice, if you can do a thing once-- I have a friend who basically told me how to do math. He's like, you look at a problem, solve it once, and you go back and think about how you solved it, like what's the process you used to solve it.

MARVIN MINSKY: So a good problem is, make up another problem like this.

AUDIENCE: That's essentially what you are learning when you are practicing.

MARVIN MINSKY: It's probably too hard to grade. You can't teach things you can't grade in the modern-- Yes.

AUDIENCE: So I believe that math is too abstract. And so it's difficult to go from one representation to another, and that would be the whole problem. I can't learn a new concept without having a concept that's very near that concept, that's very similar. So it's just that, if I don't have good representations of a lot of things, it's difficult to keep building representations. So when I learned, I don't know, topology, I should know analogies. And then I can go from there to there, because the representation is very close. And so people that are good at math, maybe they have a lot of representations, so it's easier to add a new representation of a thing, because it's close.

MARVIN MINSKY: It certainly would be nice to know.

AUDIENCE: Like in the example of vectors, I already have the concept of, I don't know, perpendicular lines. And so just adding more lines is easy. But, I don't know, the n-dimensional thing, it's very abstract. I don't have any other representation that's close by that concept. It's just that I need a lot of concepts and representations. And I need one that's close by.

MARVIN MINSKY: Yes, between the vectors and the Fourier, they're so different. What would be in between those two?

AUDIENCE: The Fourier is the [INAUDIBLE] kind of concepts.

MARVIN MINSKY: Actually, square waves-- probably square waves are easier to understand than sines and cosines. But they're not continuous-- I mean, not differentiable.
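The square-wave remark is easy to try out: the standard Fourier series of a square wave uses only sines of odd frequencies, and a couple of dozen terms already sit close to the flat top, even though the limiting wave is discontinuous. A short sketch:

```python
import math

def square_wave_partial_sum(t: float, terms: int) -> float:
    """Partial Fourier series of a unit square wave: (4/pi) * sum of sin((2k+1)t)/(2k+1)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(terms)
    )

# Sample a few points on the "high" half-period (0 < t < pi); values approach 1.0.
for t in (0.5, 1.5, 2.5):
    print(t, round(square_wave_partial_sum(t, terms=25), 3))
```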

MARVIN MINSKY: Who has a problem to solve?

AUDIENCE: I can comment on, I think, [INAUDIBLE] comment on having lots of practice. And I don't think that actually is so much outside the representational view. I don't know what's going on neurologically. But if you have new representations, if you assume that they are symbolic representations, in this sense they are quite generative things. You can combine them and you can make a lot of stuff. But usually when you learn something new, like if you learn a rule, there are going to be exceptions. So when you repeat things, one of the reasons for that might be to find that box. You might not [INAUDIBLE]

MARVIN MINSKY: Well now, when you practice a little piece of music by repeating, do you change your representation, or do you just repeat and hope it gets better?

AUDIENCE: So I think there's a difference between-- OK, so there's practice in the sense of traditional classical music practice. And then there's the idea of tinker practice, if you will. So I think that the type of practice that I'm advocating is the type where you're actually sort of turning around the concept, or the thing, the object, in your head, so that you're looking at it from many different perspectives and connecting it to many different meanings, which is actually conducive to expanding your representation, connecting it to different concepts-- all the good things that help us remember it and use it better. This is very different from classical music practice. Having been a classical pianist for 18 years, it's not a really good way of doing things.

AUDIENCE: I was wondering, have there been any studies of children, or have there been any children, who have just played piano or some keyboard instrument [INAUDIBLE] over their lives, and then suddenly they have something where, when you press it, you can control the volume?

MARVIN MINSKY: A theremin.

AUDIENCE: Yeah, yeah, yeah. But if a child could very suddenly learn a completely new dimension, that is, the dynamics, how would it react to that? So I don't know if there are any such cases. But I would suspect that it would go to [INAUDIBLE] backward [INAUDIBLE] and then find some children [INAUDIBLE].

MARVIN MINSKY: There was a nice period in the Media Lab when we were building a three-dimensional theremin for Penn of Penn and Teller. He's quite a good musician. We were making gadgets so he could wave his hands. But I wonder, in classical music, there ought to be some very short pieces that come in 10 variations. Because we make children learn fairly long pieces where it's just repeating.

AUDIENCE: There's the Diabelli Variations, which is like 32 or 33 really short pieces.

MARVIN MINSKY: Well, but only eight people in the world can play it.

AUDIENCE: But I guess the thing with classical music is that it kind of just makes you learn this one thing. And you learn it by repeating it over and over again. Whereas in something like jazz, which is more improvisational, it's like you have a template. And each time you go through it, you can traverse a different path through it. But even in more classical, classical music, where you're playing the same thing each time, I think there's still a good way of doing it and a not-so-good way, the good way being, each time you practice it, you subtly vary it somehow. Like you change the expression of it, you play it faster or slower, things like that. And I guess this goes back to the whole-- it's like, each time you reinforce an idea by simply repeating it, well, in the beginning, it might help you familiarize yourself with the idea. But if you repeat it and vary it slightly each time to look at different dimensions of it, then you learn it better.

MARVIN MINSKY: How many of you know that piece, Beethoven's Diabelli? Well, you should google it up and listen to it. It has 32. Is it 32?

AUDIENCE: I think it's 32, and then there's the original theme. So it's like 33 [INAUDIBLE] maybe.

MARVIN MINSKY: The one I like is the next to last, which is [WORDLESS SINGING].

AUDIENCE: Oh, the fugue, yeah.

MARVIN MINSKY: The fugue. So that'll give you another view of classical music, because the pieces are fairly short and they all have some ideas in common. And it's sort of like poetry. What do you call those poems where there are many verses and each verse ends with the same line, but it means something different each time?

AUDIENCE: What's an example?

MARVIN MINSKY: What?

AUDIENCE: What's an example of it? Can you think of a poem?

MARVIN MINSKY: I couldn't hear you.

AUDIENCE: Well, there's like the villanelle--

MARVIN MINSKY: It's like a rondo, except that it changes its meaning each time. And it's the same words. They're pretty hard to make, I guess.

AUDIENCE: There's the villanelle, which has a bit of that. There's the famous one that's, what, like, "do not go gentle into the night" or something. "Rage, rage against the coming of" something.

MARVIN MINSKY: Anyway, I'll email you the Diabelli. I have a friend, Manfred Clynes, who wrote this book called Sentics, who used to play that particular Beethoven thing. Well, last important question. Thanks for coming.

[LAUGHTER]
