5. From Panic to Suffering

MIT OpenCourseWare
4 Mar 2014 · 116:55

Summary

In this lecture, Professor Marvin Minsky takes a close look at the complicated relationship between consciousness and artificial intelligence (AI). He begins by noting that some scholars, such as Steve Pinker, regard the problem of consciousness as the most important problem in the field of AI, and suggests that if the puzzle of consciousness cannot be solved, the AI systems we build may lack some important property. He then discusses Roger Penrose's view that quantum mechanics and Gödel's incompleteness theorem are reasons why AI cannot be achieved. Minsky also reviews different views of consciousness, in particular the problem of "qualia", the difference between distinct sensory experiences. Through examples he argues that consciousness may not be one single big problem but a collection of many smaller ones. He also raises the question of why computers take so long to reboot and why they cannot remember their previous state, which touches on some interesting problems in computer science. He closes with further reflections on consciousness, considering possible similarities and differences between human and machine consciousness. The discussion ranges over philosophy, cognitive science, and computer science, and shows Minsky's deep insight into these difficult questions.

Takeaways

  • 🤖 Some experts in the AI field, such as Steve Pinker, regard the problem of consciousness as the most important problem in artificial intelligence.
  • 🧐 Marvin Minsky notes that some AI skeptics, such as the physicist Penrose, argue that AI is impossible because of factors like quantum mechanics and Gödel's incompleteness theorem.
  • 😕 Pinker is skeptical about how a machine could have qualia, in particular whether a machine could experience seeing red differently from seeing green.
  • 🔍 Minsky holds that consciousness is not one single big problem but many different smaller ones, each of which can be explored by trying to program a solution.
  • 🚦 In discussing consciousness, Minsky uses the example of Joan's decision process while crossing the street to show that consciousness may involve many different kinds of mental activity.
  • 🧠 He suggests the brain may contain several different "self-models" that are carrying out different tasks at any moment, such as telling stories or building visual representations.
  • 💡 Minsky discusses color perception, pointing out that the perceived color of a region depends not just on that region's color but on its difference from nearby regions.
  • 📈 He criticizes statistical learning methods, arguing that they only handle easy problems and are powerless on complex ones.
  • 📚 He mentions Solomonoff induction, a theory that considers all possible programs for predicting a data set; although it is impractical to compute, it can suggest how to make better predictions.
  • 👶 Minsky also talks about child psychology and language learning, stresses the importance of child development, and regrets the lack of current research in child psychology.
  • 📖 He mentions the value of literature and science fiction, arguing that science fiction, by exploring beings with different emotions and ways of thinking, offers more room for imagination.

Q & A

  • How does MIT OpenCourseWare provide high-quality educational resources?

    -MIT OpenCourseWare relies on donations from the public to continue offering free, high-quality educational resources. To make a donation, or to view additional materials from hundreds of MIT courses, visit ocw.mit.edu.

  • What is Marvin Minsky's view of the problem of consciousness?

    -Marvin Minsky argues that consciousness is not one single big problem but can be broken down into many fairly hard ones. He rejects treating consciousness as a special, unsolvable mystery and believes these problems can be chipped away at through programming and AI methods.

  • Why doesn't a computer's reboot time seem to shrink as hardware gets faster?

    -Even though modern computers are roughly 1,000 times faster than they used to be, reboot times have not dropped noticeably. An audience member suggests that computers now have to load far more. Minsky asks why a machine can't simply reload what was running last time instead of loading everything from scratch.

  • What is Marvin Minsky's view of statistical learning?

    -Minsky thinks statistical learning is useful for easy problems with large numbers of small causes, but not for complex problems that require intelligent hypotheses. He encourages researchers to look for smarter ways of forming hypotheses about what is actually going on.

  • What is Solomonoff induction, and how does it relate to statistical learning methods?

    -Solomonoff induction is a theory that assumes all data are generated by programs and uses a program's description length as the prior over the space of possible programs. It relates to statistical learning in that both make predictions from a data set, but Solomonoff induction focuses on finding the shortest program that generates the data. (A standard formulation of the prior is sketched just after this Q&A section.)

  • Why does Marvin Minsky think language is unlikely to be based on grammar?

    -Minsky doubts that language is based purely on grammar, because grammar cannot adequately account for the recursion and complexity of language. He suspects language is built on some kind of frame operations, which fit better with a reasoning system.

  • Why does Marvin Minsky think emotions are not especially important for human intelligence?

    -Minsky argues that emotions exist mainly to keep an organism alive, for example by monitoring blood-sugar levels to make sure it eats, but they do not help people do complex thinking or write papers. The role of emotions in human intelligence has been greatly exaggerated by science-fiction writers.

  • Why does Marvin Minsky think the way children learn matters for the development of AI?

    -Minsky considers how children learn to be an important research area because it may reveal the mechanisms by which human intelligence develops, which is essential for designing AI systems that can model human learning.

  • What is Marvin Minsky's outlook on the future of AI?

    -Minsky believes the future of AI is not just to imitate human intelligence but to go beyond it and solve problems that humans are bad at. He stresses the importance of building intelligent machines that can think and reason on their own.

  • Why does Marvin Minsky think we need to go beyond our biological bodies?

    -Minsky argues that because biological bodies are fragile and limited (they get sick, age, and die), we need to turn ourselves into intelligent machines to overcome those limits. This is also a matter of facing future challenges, such as the Earth becoming uninhabitable when the Sun turns into a red giant.

  • What is Marvin Minsky's view of early education?

    -Minsky considers early education very important and regrets that President Nixon did not put major funding into research on how children learn. He stresses the value of studying how children learn and sees it as the key to improving education.
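
Regarding the Solomonoff-induction answer above, a standard way to write the description-length prior (a textbook formulation, not something spelled out in the lecture) is, for a fixed universal machine U:

```latex
% Universal (Solomonoff) prior over a finite string x:
% sum over all programs p that make the universal machine U output x,
% weighting each program by 2^(-|p|), where |p| is its length in bits.
P(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|}
% Prediction weighs every program consistent with the data,
% with shorter (simpler) programs dominating the sum.
```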

Outlines

00:00

📚 MIT OpenCourseWare and the Question of Consciousness

The discussion opens with a note that MIT OpenCourseWare content is provided under a Creative Commons license to support the free sharing of high-quality educational resources. Minsky then turns to the problem of consciousness, noting that scholars such as Steve Pinker consider it the most important problem in artificial intelligence. He criticizes the view that machines cannot possibly be conscious or have qualia, and offers his own view that consciousness is not one single big problem but a combination of many.

05:00

🖥️ The Puzzle of Computer Reboot Times

Minsky raises the question of why reboot times have not shrunk noticeably even though computers have become far faster. He and the audience discuss possible reasons, including that computers now load much more and that loading is tuned to how long users are willing to wait. Minsky is puzzled that current computers cannot simply remember their previous state and considers it a question worth thinking about.

10:02

🧬 The Origin of Life and the Evolution of Intelligence

Minsky discusses the origin of life and the evolution of intelligence, touching on the chemical origins of life and the evolution of single-celled organisms. He points out that over the course of human development, the growth of the brain brought survival challenges that were met through diverse ways of thinking and resourcefulness. He also mentions William Calvin's views on the development of human intelligence and the work of Aaron Sloman and Daniel Dennett on consciousness and intelligence.

15:05

🤔 The Many Problems of Consciousness

Minsky explores several aspects of consciousness and criticizes the view that it is a single hard problem. He argues that consciousness covers many different problems, such as making decisions, describing the body's condition, and remembering. From a programmer's point of view, he suggests, each of these can be attacked with a specific program. He also discusses how a verbal description gets turned into a visual image and offers his view of the different components of consciousness.

20:16

🎨 Perception, Color, and Philosophical Problems

In this segment Minsky discusses philosophical questions about perception and color, in particular how philosophers treat color as a hard case of perceptual distinction. He asks why philosophers always pick color as the focus of discussion and offers a scientific account of color perception, including cells in the visual system that are sensitive to different spectra. He also discusses the concept of qualia and offers some criticism of the philosophical debate.

25:19

Consciousness, Self-Models, and Thought Processes

Minsky digs into what consciousness means, using a diagram of a mind at work and picking out four kinds of self-model processes. He discusses how different parts of the brain keep different historical narratives and representations, and how, while solving a problem, they make temporary records and learn. He asks what really distinguishes the activities we are conscious of from the equally complicated activities we can say much less about, and how we tell the two apart.

30:27

🎶 Music, Geometry, and the Many Dimensions of Consciousness

Minsky approaches consciousness through music and geometry, asking how musicians think about music. He compares a composer's understanding of music with an ordinary listener's and questions whether all complicated music is produced by complicated thinking. He also discusses the role of commonsense knowledge and how philosophers use simple examples to discuss perception and self-awareness.

35:28

🧠 Consciousness, Feelings, and the Complexity of the Brain

In this segment Minsky discusses the complexity of consciousness and feeling, in particular how philosophers treat feeling as a hard problem. He raises questions about pain and how it is described, arguing that pain is a reaction of the intelligent parts of the brain to malfunctions in other parts. He also discusses the complexity of psychology and criticizes philosophers for oversimplifying supposedly simple states.

40:34

🤖 The Future of AI and the Limits of Machine Learning

Minsky discusses the future of artificial intelligence, especially the limits of machine learning. He criticizes statistical learning methods and stresses the importance of intelligent hypotheses. He also discusses Solomonoff induction and encourages students to explore unfashionable research directions in order to attack important problems.

45:35

🧬 Child Psychology and Cognitive Development

Minsky discusses the importance of child psychology, especially Jean Piaget's research on how children learn. He mentions the closing of Piaget's laboratory and the scarcity of research on child development. He also mentions some of MIT's work on child development and offers observations on language learning and cognitive development.

50:38

📚 Learning, Language, and Machine Intelligence

In this final segment Minsky and the audience discuss learning, language, and machine intelligence. They explore how children learn language and whether machines could learn in a similar way. Minsky expresses doubt about grammar-based theories of language and proposes a view of language understanding based on frame operations.

Keywords

💡Consciousness

Consciousness is explored in the video as a complex phenomenon. Marvin Minsky argues that it is not one single big problem but is made up of many smaller ones. Different aspects of consciousness, such as emotion, perception, and memory, are products of complex activity in different parts of the brain. In the video Minsky criticizes the view that consciousness is an indivisible, simple phenomenon.

💡Artificial Intelligence

Artificial intelligence is the core topic of the video. Marvin Minsky discusses the development of AI and the challenges it faces. He mentions AI skeptics such as Penrose and their doubts about whether machines can reach human-level intelligence. Minsky himself is more optimistic about AI, believing that understanding the complexity of human consciousness and cognition can drive its progress.

💡Quantum Mechanics

Quantum mechanics is mentioned as one of the arguments raised by AI skeptics, who suggest that quantum-mechanical principles may be the reason machines cannot achieve genuine intelligence. Minsky takes the point from an audience member while expressing his own skepticism toward the argument.

💡Gödel's Incompleteness Theorem

Gödel's incompleteness theorem is another philosophical issue mentioned in the video, connected to the AI skeptics' position. The theorem shows that in any consistent formal system containing basic arithmetic there are statements that can be neither proved nor refuted within that system. Minsky uses the concept to explain why some argue that certain problems may be unsolvable for machines.

💡Evolution

Evolution is used in the video to explain the development of human intelligence. Minsky describes how life and intelligence evolved over hundreds of millions of years and suggests that the development of the human brain may have been shaped by natural events such as ice ages. Evolutionary theory provides a framework for understanding human consciousness and cognitive abilities.

💡Emotion

Emotion is discussed as one aspect of consciousness. Minsky holds that emotions matter because they help us survive, but he also points out that they do not take a direct part in complex cognitive processes such as writing papers or solving abstract problems.

💡Free Will

Free will is mentioned as a philosophical and social concept tied to responsibility and moral judgment. Minsky suggests that free will may be a socially constructed notion used for legal and moral judgments, and that its scientific substance is unclear.

💡Bayesian Learning

Bayesian learning is discussed as a statistical learning method that uses probability theory to update belief in a hypothesis. Minsky is critical of it, arguing that it falls short on complex problems because Bayesian methods demand large amounts of data and computation.

💡Machine Learning

Machine learning is discussed at length. Minsky argues that most machine-learning methods are statistical and may not apply to genuinely hard problems of intelligence. He advocates deeper understanding and reasoning rather than pattern recognition over data alone.

💡Child Development

Child development is mentioned as an important research area concerning how children learn and understand the world. Minsky stresses that studying how children learn matters both for understanding human cognition and for developing artificial intelligence.

💡Language Learning

Language learning is discussed as a key part of child development. Minsky notes that children learn language through interaction with the people around them, a process involving statistical learning and social interaction. He also points out that although children are immersed in language, they must actively take part in order to learn to use it.

Highlights

The continued free availability of MIT OpenCourseWare resources depends on public support and donations.

Marvin Minsky discusses the problem of consciousness, arguing that it is not one single big problem but a collection of many hard ones.

For exploring consciousness, Minsky suggests treating it as a set of sub-problems that can be tackled separately.

Minsky is puzzled that Steve Pinker and others treat consciousness as the most important problem in artificial intelligence.

A discussion of computer reboot times raises some philosophical questions about computer science and technology.

Minsky asks why computers cannot remember their previous state.

Discussion of color perception and the philosophical notion of qualia.

Minsky questions standard accounts of color perception, suggesting that the experience of a color may have to do with the things we commonly associate with it.

Discussion of AI skeptics such as Penrose, who questions the possibility of AI on the grounds of quantum mechanics and Gödel's incompleteness theorem.

Minsky compares the development of human and dolphin brains, raising evolutionary questions about brain size and intelligence.

Questions about intelligence and natural selection: why, over the course of evolution, greater intelligence was not always an advantage.

Minsky discusses the future of AI, including how it could help humans transcend biological limits.

A critique of the effectiveness of statistical learning methods, arguing that they are limited when facing complex problems.

Minsky raises questions about machine learning and child development, stressing the importance of studying how children learn.

Discussion of language learning, suggesting that language may be based not on grammar but on frame operations.

Minsky offers advice for AI research, encouraging the exploration of non-traditional, unfashionable approaches.

Emphasis on AI's potential for modeling human learning, especially in language and cognitive development.

Minsky reflects on the limits of science and philosophy in understanding consciousness and intelligence.

Transcripts

play00:00

The following content is provided under a Creative

play00:02

Commons license.

play00:03

Your support will help MIT OpenCourseWare

play00:06

continue to offer high quality educational resources for free.

play00:10

To make a donation or to view additional materials

play00:12

from hundreds of MIT courses, visit MIT OpenCourseWare

play00:16

at ocw.mit.edu.

play00:21

MARVIN MINSKY: If you have any opinions about consciousness.

play00:27

There is one problem in the artificial intelligence people,

play00:33

is there's a lot of pretty smart people like Steve

play00:40

Pinker and others who think that the problem of consciousness

play00:47

is maybe the most important problem

play00:51

no matter what we do in artificial intelligence.

play01:00

Anybody read Pinker?

play01:02

I can't figure out what his basic view is.

play01:05

But there's a feeling that if you

play01:10

can't solve this all important mystery,

play01:13

then maybe whatever we build will be lacking

play01:20

in some important property.

play01:24

There was another family of AI skeptics,

play01:29

like Penrose, who's a physicist and a very good physicist

play01:35

indeed, who wrote more than I think

play01:40

three different books arguing that AI is impossible because--

play01:52

I'm trying to remember what he thought

play01:54

was missing from machines.

play01:56

AUDIENCE: Quantum mechanics.

play01:58

MARVIN MINSKY: Quantum mechanics was one and Godel's theorem,

play02:05

incompleteness was another.

play02:08

And for example, if you try to prove

play02:13

Godel's theorem in any particular logic,

play02:16

you'll find some sort of paradox appearing

play02:19

where if you try to formalize the proof,

play02:24

you can't prove it in the logical system you're

play02:27

proving it about.

play02:30

I forget what that's called.

play02:33

So there are these strange logical

play02:37

and semi-philosophical problems that bother people.

play02:42

And Pinker's particular problem is,

play02:47

he doesn't see how you could make a machine be conscious.

play02:53

And in particular, he doesn't see

play02:54

how a machine could have a sense called qualia, which

play03:00

is having a different experience from seeing something red

play03:06

and from seeing something green.

play03:08

If you make a machine with two photo cells

play03:11

and put a green filter on one and a red filter in front

play03:15

of the other and show them objects of different colors,

play03:20

then they'll respond differently and you

play03:23

can get the machine to print out green and red and so forth.
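
As a toy illustration of the two-photocell machine Minsky describes here (a hypothetical sketch with made-up filter numbers, not anything built in the course), the point is simply that such a machine responds differently to red and green, even though Pinker's worry is that it still has only descriptions rather than two different experiences:

```python
# Toy version of the two-photocell example: one sensor behind a red filter,
# one behind a green filter; answer "red" or "green" from whichever responds more.
# Purely illustrative -- the sensitivity curve below is invented.

def filtered_response(wavelength_nm: float, filter_center_nm: float) -> float:
    """Crude bell-shaped sensitivity of a photocell behind a colored filter."""
    return max(0.0, 1.0 - abs(wavelength_nm - filter_center_nm) / 100.0)

def classify(wavelength_nm: float) -> str:
    red_cell = filtered_response(wavelength_nm, 650.0)    # behind the red filter
    green_cell = filtered_response(wavelength_nm, 530.0)  # behind the green filter
    return "red" if red_cell > green_cell else "green"

print(classify(640.0))  # -> red
print(classify(520.0))  # -> green
```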

play03:35

And he's worried that no matter what you do,

play03:40

the machine will only have some logical descriptions

play03:44

of these things, and it won't have a different experience

play03:48

from the two things.

play03:52

So I'm not going to get into that.

play03:56

I wonder if Word is going to do this all the time until I

play03:58

kill something.

play04:02

What if I put it off screen?

play04:11

That's a good way to deal with philosophical problems,

play04:18

just put them in back where you can't see them.

play04:24

Oh, the picture disappeared.

play04:26

That's really annoying.

play04:33

Everything disappeared.

play04:42

OK, think of a good question while I reboot this.

play04:52

Whoops.

play04:54

Well, how about pressing--

play05:00

that did it.

play05:06

Whoops.

play05:11

That's the most mysterious problem.

play05:13

Does anybody have an explanation of why computers

play05:16

take the same amount of time to reboot,

play05:19

even though they're 1,000 times faster than they were?

play05:24

AUDIENCE: They have to load 1,000 times more stuff

play05:27

nowadays.

play05:28

MARVIN MINSKY: Yes, but why can't it

play05:29

load the things that were running last time?

play05:34

For some reason, they feel they have to load everything.

play05:38

AUDIENCE: Maybe there is a certain amount of time

play05:40

that they think humans are willing to wait,

play05:42

so therefore, they will load as much as they can

play05:44

during that time.

play05:46

Maybe that might be it.

play05:48

I think if they could, they would upload even more,

play05:50

but they can't because that's how the human patience.

play05:53

And so they always run out of that one.

play05:56

MARVIN MINSKY: Does the XO take time to do?

play06:01

AUDIENCE: Yes, it takes several seconds.

play06:05

MARVIN MINSKY: So it keeps it in memory.

play06:09

AUDIENCE: It doesn't have it organized.

play06:14

MARVIN MINSKY: I'm serious.

play06:23

I guess it would.

play06:24

But it doesn't cost much to keep a dynamic memory refreshed

play06:29

for a month or two.

play06:35

If anybody can figure it out, I'd

play06:37

like to know because it seems to me that it

play06:39

should be easy to make Unix remember what state it was in.

play06:47

AUDIENCE: Well, if it remembered exactly what state it was in,

play06:50

it wouldn't be very useful.

play06:52

We'd have to change the statement every time.

play06:54

MARVIN MINSKY: Well, I mean it could

play06:56

know which applications you've been running or something.

play07:05

Anyway it's a mystery to me.

play07:09

For example, in time sharing systems, you have many users.

play07:13

And the time shared system keeps their state working fine.

play07:27

Let's see if this is actually recovered from its bug.

play07:42

Maybe one of those forms of Word doesn't work.

play07:48

That's a bad--

play07:49

AUDIENCE: When computers hibernate and stuff,

play07:51

they say if they have to read the disk,

play07:54

it takes generally on a modern system 30 to 45 seconds just

play07:58

to load its entire memory content from disk.

play08:02

MARVIN MINSKY: That could be the trouble.

play08:06

Why can't it load the part that it needs?

play08:09

But never mind.

play08:12

I'm sure that there's something wrong with this all.

play08:20

But now I've got another bug.

play08:30

That one seems better.

play08:32

Nope.

play08:35

Sorry about all this.

play08:37

I might have to use the backup.

play08:41

Anyway, I'll talk about consciousness again.

play08:46

But I'm assuming that you've read all or most of chapter 4.

play08:52

And we could start out with this kind of question, which

play08:58

is I think of evolution as--

play09:04

is this working?

play09:08

I think of us as part of the result of a 400 million year--

play09:15

400 mega year process.

play09:17

And because the first evidence for forms of life

play09:23

occurred about 400 million years ago, which is pretty long.

play09:31

The earth appears to be about 4 billion years.

play09:35

So life didn't start up right away.

play09:38

And so there was a 100 million years of the first one

play09:45

celled animals.

play09:48

Maybe there were some million years

play09:50

of molecules that didn't leave any trace at all.

play09:55

So before there was a cell membrane,

play09:59

you could imagine that there was a lot of evolution.

play10:02

But nobody has posed a plausible theory

play10:05

of what it could have been.

play10:10

There are about five or six pretty good theories

play10:14

of how life might have started.

play10:17

There had to be some way of making a complex molecule that

play10:23

could make copies of itself.

play10:26

And one standard theory is that if you had just the right kind

play10:30

of muddy surface, you could get some structure that

play10:36

would form on that, peel away, and leave an imprint.

play10:40

But it sounds unlikely to me because those molecules

play10:44

would have been much smaller than the grains of mud.

play10:47

But who knows?

play10:50

Anyway that's 100 million years of one celled things.

play10:55

And then there's 100 million years of things

play11:01

leading to the various invertebrates,

play11:06

and 100 million years of fish, reptile like things

play11:13

and mammals.

play11:15

And we're at the very most recent part

play11:18

of that big fourth collection of things.

play11:28

I think there's a--

play11:29

whoops.

play11:30

Is this not going to work?

play11:34

All right, that's my bug, not MIT's.

play11:59

So human development, splitting off

play12:04

from something between a chimpanzee and a gorilla,

play12:10

has a history of about 4 Million years.

play12:15

The dolphins developed, which have very large brains somewhat

play12:23

like ours in that they have a big cortex,

play12:28

developed before that.

play12:30

And I forget, does anybody know?

play12:34

My recollection is that they stopped developing

play12:37

about 4 million years ago.

play12:39

So the dolphins brains got to a certain size.

play12:43

The fossil ones of I think about 4 million years ago are

play12:48

comparable to the present ones.

play12:51

So nobody knows why they stopped.

play12:58

But there are a lot of reasons why

play13:03

it's dangerous to make a larger brain.

play13:08

And especially if you're not a fish,

play13:15

because it would be slower and hard to get around

play13:20

and you would have to eat more and that's a bad combination.

play13:24

And other little bugs like taking longer to mature.

play13:30

So if there are any dangers, the danger

play13:35

of being killed before you reproduce

play13:37

is a big handicap in evolution.

play13:43

In fact, if you think of the number of generations of humans

play13:48

since presumably they've been living

play13:55

for sort of 20 year lifespan for most of that 4 million years,

play14:02

like other primates.

play14:06

Compare that to bacteria.

play14:09

Some bacteria can reproduce every 20

play14:12

minutes instead of 20 years or 10 years or whatever it is.

play14:19

So the evolution of smaller animals

play14:23

is vastly faster, in fact, by factors of order of hundreds.

play14:27

And so generally these big slow long-lived animals

play14:37

have huge evolutionary disadvantages.

play14:40

Anyway, here's four major ones.

play14:47

So what made up for that and that's why chapter 4.

play15:05

I don't think I wrote anything about this in chapter 4.

play15:09

But that's why it's interesting to ask

play15:13

why are there so many ways to think

play15:15

and how did we develop them?

play15:17

And a lot of that comes from this evolutionary problem

play15:24

that as you got smarter and heavier,

play15:27

it got more and more difficult to survive.

play15:30

So your collection of resourcefulness

play15:34

had to keep track.

play15:35

Well, in that four billion years this only happened once.

play15:42

Well, the octopuses are pretty smart.

play15:45

And the birds, just consider how much a bird does

play15:52

with its sub-pea-sized brain.

play15:58

But it seems to me that it's hard to generalize

play16:06

from the evolution of humans to anything else because--

play16:12

because what?

play16:13

We must have been unbelievably lucky.

play16:17

William Calvin has an interesting book.

play16:20

He's a neurologist who writes pretty interesting things

play16:24

about the development of intelligence.

play16:27

And he attributes a lot of human superiority

play16:34

to a series of dreadful accidents, namely five or six

play16:39

ice ages, in which the human population was knocked down,

play16:44

nobody knows to how small.

play16:47

But it could have been as small as tens of thousands.

play16:50

And we just squeaked by.

play16:52

And only the very, very, very smartest of them

play16:59

managed to get through a few hundred

play17:02

years of terrible weather and shortage of food and so forth.

play17:09

So that's-- anybody remember the title of--

play17:15

have you read any William Calvin?

play17:18

Interesting neurologist out in California somewhere.

play17:44

There is very small handful of people, including

play17:49

William Calvin, that I think have

play17:52

good ideas about intelligence in general and how it evolved

play18:00

and so forth.

play18:03

And Aaron Sloman is a philosopher

play18:07

at the University of Birmingham in England,

play18:14

who has theories that are maybe the closest to mine.

play18:21

And he's a very good technical philosopher.

play18:27

So if you're interested in anything about AI,

play18:38

if you just search for Aaron Sloman, he's the only one.

play18:42

So Google will find him instantly for you.

play18:46

And he's got dozens of really deep essays

play18:51

about various aspects of intelligence and problem

play18:54

solving.

play18:59

The only other philosopher I think is comparable

play19:03

is Daniel Dennett.

play19:07

But Dennett is more concerned with classical philosophical

play19:11

issues and a little less concerned with exactly how

play19:16

does the human mind work.

play19:18

So to put it another way Aaron Sloman writes programs

play19:24

and Dennett doesn't.

play19:29

AUDIENCE: He's basically a classical philosopher.

play19:32

MARVIN MINSKY: What's that?

play19:34

AUDIENCE: If you're in an argument

play19:35

with a classical philosopher about issues

play19:37

in classical philosophy, Dennett's arguments

play19:40

can back you.

play19:42

MARVIN MINSKY: Yeah.

play19:43

But I'm not sure we can learn very much.

play19:45

AUDIENCE: No.

play19:49

MARVIN MINSKY: I love classical philosophy.

play19:50

But the issues they discuss don't make much sense anymore.

play19:56

Philosophy is where science has come from.

play20:03

But philosophy departments keep teaching what they were.

play20:16

What chapter does this story first appear in?

play20:23

Joan is part way across the street.

play20:27

She's thinking about the future.

play20:31

She sees and hears a car coming and makes

play20:37

a quick decision about whether to back up or run across.

play20:43

And she runs across.

play20:45

And I have a little essay about the kinds of issues

play20:50

there, if you ask what was going on in Joan's mind?

play21:05

This is a short version of an even larger list

play21:09

that I just got tired of writing.

play21:17

And I don't know how different all of these 20 or 30 things

play21:22

are.

play21:23

But when you see discussions of consciousness in Pinker

play21:27

and everyone except Dennett and Sloman,

play21:35

they keep insisting that consciousness

play21:38

is a special phenomenon.

play21:40

And my view is that consciousness is--

play21:54

there certainly a lot of questions to ask.

play21:57

But there isn't one big one.

play22:00

I think Pinker very artistic--

play22:04

art-- I can't think of the right word.

play22:10

He says this is the big central problem.

play22:14

What is this amazing thing called consciousness.

play22:18

And he calls that the hard question of psychology.

play22:24

But if you look at this and say, how did she select the way

play22:28

to choose among options?

play22:30

Or how did she describe her body's condition?

play22:34

Or how did she describe her three most noticeable

play22:39

recent mental states or whatever?

play22:42

Each of those are different questions.

play22:44

And if you look at it as from the point of view

play22:47

of a programmer, you could say, how could a program that's

play22:53

keeping push down lists and various registers and caches

play23:01

and blah, blah, blah, how would a program do this one?

play23:07

How do you think about what you've recently done?

play23:10

Well, you must have made a representation of it.

play23:15

Maybe you had a push down list and were

play23:17

able to back up and go to the other state.

play23:21

But then the state of you that's wondering

play23:24

how to describe that other state wouldn't be there anymore.

play23:27

So it looks like you need to have

play23:31

two copies of a process or some way to timeshare the processor

play23:35

or whatever.

play23:37

And so if you dwell on this kind of question for a while,

play23:42

then you say there's something wrong with Pinker.

play23:45

Yes, he's talking about a very hard problem.

play23:49

But he's got blurred maybe 20, 30, 100, I don't know,

play23:56

pretty hard problems.

play23:58

And each of these is fairly hard.

play24:01

But on the other hand, for each of them,

play24:05

you can probably think of a couple of ways

play24:07

to program something that does something a little bit

play24:12

like that.

play24:13

How do you go from a verbal description

play24:20

of two blocks supporting a third block to a visual image

play24:27

if you have one?

play24:29

Well, you could think of a lot of ways those--

play24:33

I didn't say what shape the blocks were and so forth.

play24:36

And you can think of your mind.

play24:41

One part of your mind can see the other part

play24:43

trying to figure out which way to arrange those blocks.

play24:46

Maybe all three blocks are just vertically

play24:51

like this, this and this.

play24:53

That's two blocks supporting a third block.

play24:57

And so instead of saying consciousness

play25:01

is the hard problem, you could say consciousness

play25:06

is 30 pretty hard problems.

play25:09

And I bet I could make some progress on each of them

play25:12

if I spent two or three years or if I had 30 students spending

play25:19

or whatever.

play25:24

Actually, that's what you really want

play25:29

to do is fool some professors into thinking

play25:31

about your problem when you're a student. That's the only way

play25:36

to actually get anything done.

play25:48

Well, I'm being a little dismissive.

play25:53

And another thing that Pinker and the other people

play25:59

of his ilk, the philosophers who try to find a central problem,

play26:05

do is say, well, there's another hard problem which

play26:08

is the problem called qualia, which

play26:13

is what is the psychological difference between something

play26:18

that's red and green?

play26:20

And I usually feel uncomfortable about that

play26:27

because I was in such a conversation

play26:34

when I discovered that Bob Fano, who is one of our professors,

play26:39

was color blind.

play26:41

And he didn't have that qualia, so sort of embarrassing.

play26:52

In the Exploratorium, how many of you have been at the--

play26:57

a few.

play26:59

Maybe the best science museum in the world, and somewhere

play27:06

near San Francisco.

play27:12

But one trouble or one feature of it,

play27:19

it was designed by Frank Oppenheimer, who is

play27:22

Robert Oppenheimer's brother.

play27:25

He was quite a good physicist.

play27:28

And I used to hang around there when

play27:31

I spent a term at Stanford.

play27:40

And it had a lot of visual exhibits

play27:47

with optical illusions and colored lights doing

play27:50

different things and changes of perspective

play27:54

and a lot of binocular vision tricks.

play27:58

And there's a problem with that kind of exhibit--

play28:04

we have them here in the science museum too--

play28:07

which is that about 15% or 20% of people

play28:11

don't see stereo very well.

play28:14

And at least 10% don't view stereo images at all.

play28:20

And some of these is because one eye's vision is very bad.

play28:26

But actually if one eye is 20/20 and the other eye is 20/100,

play28:32

you see stereo fine anyway.

play28:34

It's amazing how blurred one of the images can be.

play28:40

Then some people just can't fuse the images.

play28:44

They don't have separate eye control or whatever.

play28:48

And a certain percentage don't fuse stereo for no reason

play28:53

that anybody can measure and so forth.

play28:57

But that means that if a big family is looking

play29:01

at this exhibit, probably one of them

play29:04

is only pretending that he or she can see the illusion.

play29:07

And I couldn't figure out any way to get out of that.

play29:12

But I thought if you make a museum,

play29:19

you should be sure to include some exhibits for the--

play29:23

what's the name for a person who only--

play29:27

is there a name for non-fusers?

play29:35

When you get a pilot's license, you

play29:37

have to pass a binocular vision test, which

play29:44

seems awfully pointless to me, because if you need stereo,

play29:49

which only works for about 30 feet, then

play29:54

you're probably dead anyway, maybe the last half

play30:04

second of landing.

play30:11

So anyway, so much for the idea of consciousness itself.

play30:19

You might figure out something to say

play30:26

about the difference between blue and green

play30:29

and yellow and brown and so forth.

play30:32

But why is that really more important than the difference

play30:40

between vanilla and chocolate?

play30:48

Why do the philosophers pick on these particular perceptual

play30:54

distinctions as being fundamentally hard mysteries

play30:58

whereas they don't seem to--

play31:02

they're always picking on color.

play31:05

Beats me.

play31:19

So what does it mean to say--

play31:24

going back to that little story of crossing the street--

play31:36

to say that Joan is conscious of something?

play31:40

And here's a little diagram of a mind at work.

play31:48

And I picked out four kinds of processes

play31:57

that are self models, mock whatever you're doing.

play32:04

There are probably a few parts of your brain

play32:07

that are telling little stories or making

play32:10

visual representations or whatever,

play32:15

showing what you've been doing mentally or physically

play32:20

or emotionally or whatever distinctions you want to draw.

play32:26

Different parts of your brain are

play32:28

keeping different historical narrations and representations

play32:35

maybe over different time scales.

play32:38

And so I'm imagining.

play32:41

I'm just picking on four different things

play32:44

that are usually happening at any time in your mind.

play32:49

And these two diagrams are describing or representing

play32:58

two mental activities.

play33:00

One of which is actually doing something.

play33:05

You make some decision to get something done

play33:07

and you have to write a program and start carrying it out.

play33:15

And the program involves descriptions of things

play33:18

that you might want to change, and looking at records

play33:22

of what usually happens when you do this

play33:25

so you can avoid accidents.

play33:28

So one side of your mind, which is sort of performing actions,

play33:32

could be having four processes.

play33:37

And I'm using pretty much the same--

play33:42

they're not quite.

play33:44

Wonder why I changed one and not the others.

play33:48

And then there's another part of your mind

play33:52

that's monitoring the results of these little actions

play33:56

as you're solving a problem.

play33:58

And those involve pretty much the same kinds

play34:01

of different processes, making models

play34:04

of how you've changed yourself or deciding what to remember.

play34:14

As you look at the situation that you're manipulating,

play34:18

you notice some features and you change your descriptions

play34:22

of the parts so that you were--

play34:24

in other words, in the course of solving a problem,

play34:28

you're making all sorts of temporary records

play34:30

and learning little things, stuffing them away.

play34:42

So the processes that we lump into being conscious

play34:47

involve all sorts of different kinds of activities.

play34:58

Do you feel there's a great difference

play35:00

between the things you're doing that you're conscious of

play35:04

and the often equally complicated things

play35:07

that you're doing that you can say much less about?

play35:13

How do you recognize the two?

play35:18

Do you say I've noticed this interval and that interval,

play35:23

and then in the next four measures

play35:27

we swap those intervals and we put this one

play35:29

before that instead of after?

play35:32

If you look at Twinkle, Twinkle, Little Star,

play35:35

there's a couple of inversions.

play35:37

And if you're a musician, you might, in fact,

play35:44

be thinking geometrically as these sounds are

play35:47

coming in and processing them.

play35:52

Some composers know a great deal about what they're doing.

play35:56

And some don't have the slightest idea,

play35:58

can't even write it down.

play36:00

And I don't know if they produce equally complicated music.

play36:17

What's this slide for?

play36:20

Anyway, when you look at the issues

play36:23

that philosophers discuss like qualia and self-awareness,

play36:30

they usually pick what seem to be very simple examples

play36:35

like red and green.

play36:37

But they don't-- but what am I trying to say?

play36:49

But someone like Pinker a philosopher talking

play36:53

about qualia tends to say there's something very

play36:59

different about red and green.

play37:01

What is the difference?

play37:10

I'm just saying, why did I have a slide that mentioned

play37:13

commonsense knowledge?

play37:16

Well, if you've ever cut yourself, it might hurt.

play37:24

And there's this red thing.

play37:27

And you might remember, unconsciously,

play37:33

for the rest of your life that something red signifies

play37:39

pain and uncertainty and anxiety and injury and so forth.

play37:46

And very likely you don't have any really scary associations

play37:56

with green things.

play37:59

So when people say the quality of red,

play38:03

it's so different from green.

play38:05

Well maybe it's like the differences

play38:07

over being stabbed or not.

play38:10

And it's not very subtle.

play38:12

And philosophically it's hard to think

play38:15

of anything puzzling about it.

play38:18

You might ask, why is it so hard to tell

play38:26

the difference between pleasure and pain or to describe it?

play38:31

And the answer is you could go on for hours describing it

play38:34

in sickening and disgusting detail

play38:38

without any philosophical difficulty at all.

play38:44

So what do you think of redness?

play38:46

You think of tomatoes and blood.

play38:48

And what are the 10 most common things?

play38:56

I don't know.

play38:58

But I don't see that in the discussion of qualia.

play39:02

And the qualia of philosophers try

play39:05

to say there's something very simple and indescribable

play39:08

and absolute about these primary sensations.

play39:16

But in fact, if you look at the visual system,

play39:20

there are different cells for those, which are

play39:24

sensitive to different spectra.

play39:26

But the color of a region in the visual field

play39:32

does not depend on the color of that region, so much

play39:36

as the difference between it and other regions near it.

play39:40

So I don't have any slides to show that.

play39:43

But the first time you see some demonstrations of that,

play39:49

it's amazing because you always thought

play39:51

that when you look at a patch of red, you're seeing red.

play39:55

But if the whole visual field is red slightly,

play40:00

you hardly can tell at all after a few seconds

play40:05

what the background color is.
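
A minimal sketch (my illustration, not from the lecture) of the point just made, that the apparent color of a region depends on its difference from nearby regions: a crude "opponent" signal computed as a center value minus the average of its surround is near zero when the whole field is tinted, but large for an isolated patch.

```python
import numpy as np

def surround_contrast(field: np.ndarray, row: int, col: int, radius: int = 1) -> float:
    """Center value minus the mean of its immediate surround."""
    patch = field[max(0, row - radius): row + radius + 1,
                  max(0, col - radius): col + radius + 1]
    surround_mean = (patch.sum() - field[row, col]) / (patch.size - 1)
    return field[row, col] - surround_mean

uniform_red = np.full((5, 5), 0.8)       # the whole field slightly red
patch_on_gray = np.full((5, 5), 0.2)
patch_on_gray[2, 2] = 0.8                # one red patch on a gray background

print(surround_contrast(uniform_red, 2, 2))    # ~0.0 -- hardly noticeable
print(surround_contrast(patch_on_gray, 2, 2))  # ~0.6 -- pops out
```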

play40:23

So I'm going to stop talking about those things.

play40:33

Who has an idea about consciousness

play40:35

and how we should think about it?

play40:38

Yeah.

play40:39

AUDIENCE: Maybe it's just the K-lines that are in our brain,

play40:42

so the K-lines are different for an average person.

play40:51

MARVIN MINSKY: That's interesting.

play40:55

If you think of K-lines as gadgets in your brain which--

play41:02

each K-line turns on a different activity

play41:08

in a lot of different brain centers perhaps.

play41:11

And I'm not sure what--

play41:17

AUDIENCE: So like at a moment you have a set of K-lines that

play41:21

are active.

play41:22

MARVIN MINSKY: Right, but as you mentioned in different people,

play41:25

they're probably different.

play41:26

AUDIENCE: Yeah, yeah.

play41:28

MARVIN MINSKY: So when you say red and I say red,

play41:32

how similar are they?

play41:33

That's a wonderful question.

play41:35

And I don't know what to say.

play41:37

How would we measure that?

play41:39

AUDIENCE: I know I can receive some--

play41:45

so, for example, a frog can receive some

play41:50

like with his eyes like pixels.

play41:53

And like these structures are the same.

play41:57

Like we can perceive some automatic things.

play42:01

And like this would be the same for us.

play42:03

But when we're growing, we probably create these K-lines

play42:08

for like red or green.

play42:10

MARVIN MINSKY: Right.

play42:11

The frog probably has them built in.

play42:13

AUDIENCE: Yeah.

play42:13

And probably it's very similar because we

play42:16

have centers in our brain.

play42:18

So, for example, for vision, we have a center.

play42:20

And probably like things that are close by

play42:26

will have a tendency to blend together.

play42:34

And so red would be similar to each one of us

play42:38

because it's very low level concept.

play42:41

But if you go higher, it probably, for example,

play42:45

for numbers to have different representation than red.

play42:49

I think there's started off by learning that we represent

play42:52

numbers by saying, like there is another person that presents

play42:58

just by seeing the number.

play43:00

And then you got to see it.

play43:04

MARVIN MINSKY: He has an interesting idea that maybe

play43:09

in the first few layers of visual circuits, we all share.

play43:14

They're pretty similar.

play43:16

And so for the primary--

play43:21

for the first three or four levels of visual processing,

play43:25

the kinds of events that happen for when red and green

play43:31

are together or blue and yellow.

play43:36

Those are two different kinds of events.

play43:38

But the processes in for most of us are almost identical.

play43:44

The trouble is when you get to the level of words

play43:47

that might be 10 or 20 processes away from that.

play43:52

And when you say the word red, then that

play43:58

has probably closer connections to blood and tomatoes

play44:02

than two patches of--

play44:06

anyway it's a nice--

play44:10

AUDIENCE: So like animals still have

play44:13

most of this because they don't have the K-lines.

play44:15

For example, monkeys or dogs, but when you filter,

play44:23

these animals doesn't have the ability to break K-line out

play44:30

of consciousness.

play44:32

And so you will have some kind of--

play44:38

with the animals you have like less social visualization

play44:41

or linear function representation.

play44:46

MARVIN MINSKY: Yes, well, I guess

play44:51

if you make discrimination tests,

play44:54

then people would be very similar in which color

play44:58

patterns.

play44:59

Did I mention that some fraction of women

play45:06

have two sets of red cones?

play45:13

You know, there are three colors.

play45:18

AUDIENCE: It's between the red and green.

play45:20

MARVIN MINSKY: I thought it was very close to the red, though.

play45:23

AUDIENCE: Very close to red.

play45:25

MARVIN MINSKY: So some women have

play45:27

four different primary colors.

play45:32

And do you know what fraction it is?

play45:34

I thought it was only about 10% of them.

play45:37

AUDIENCE: Yeah, it's 5% of people, 10% of women.

play45:42

MARVIN MINSKY: I thought it's only women.

play45:44

AUDIENCE: It might be.

play45:45

MARVIN MINSKY: Oh, well,

play45:46

AUDIENCE: We could look it up.

play45:54

MARVIN MINSKY: One of my friends has a 12 color printer.

play46:02

He says it costs hundreds of dollars to replace the ink.

play46:08

And I can't see any difference.

play46:12

On my printer, which is a Tektronix Phaser,

play46:17

this is supposed to be red.

play46:20

But it doesn't look very red to me.

play46:25

Does that look red to any of you?

play46:30

AUDIENCE: Reddish.

play46:31

MARVIN MINSKY: Yeah.

play46:32

AUDIENCE: Purple brownish.

play46:36

MARVIN MINSKY: It's a great printer.

play46:38

It has-- you feed it four bars of wax as your solid

play46:44

and it melts them and puts them on a rotating drum.

play46:48

And the feature is that it stays the same for years.

play46:57

But it's not very good.

play47:01

AUDIENCE: It might look red on different paper.

play47:05

MARVIN MINSKY: No, I tried it.

play47:07

AUDIENCE: I'm sure if you put it up to a light bulb,

play47:10

we could make it all sorts of colors.

play47:12

MARVIN MINSKY: I think what I'll do is--

play47:17

I saw a phaser on the third floor somewhere.

play47:20

Maybe I'll borrow their red one and see

play47:25

if it's different from mine.

play47:44

Well, let me conclude because I--

play47:52

I think this really raises lots of wonderful questions.

play48:13

And I wonder if we wouldn't--

play48:25

does this make things too easy?

play48:31

I think what happens in the discussions of the philosophers

play48:38

like Pinker and most of the others

play48:46

is that they feel there's a really hard problem, which

play48:50

is what is the sense of being?

play48:52

What does it mean to have an experience,

play48:57

to perceive something?

play49:01

And how they want to argue that somehow--

play49:06

they are saying they can't imagine how anything that has

play49:12

an explanation, how any program or any process

play49:16

or any mechanical system, could feel pain or sorrow or anxiety

play49:24

or any of these things that we call feelings.

play49:28

And I think this is a curious idea that

play49:39

is stuck in our culture, which is that if something is

play49:45

hard to express, it must be because it's

play49:50

so different from anything else, that there's no way

play49:54

to describe it.

play49:56

So if I say, exactly how does it feel to feel pain?

play50:06

Well, if you look at literature, you'll see lots of synonyms

play50:12

like stabbing or griping or aching or you might find 50

play50:18

or--

play50:20

I mentioned this in first lecture I think,

play50:22

that there are lots of words about emotional or--

play50:28

I don't know what to call them--

play50:30

states.

play50:31

But that doesn't mean that they're simple.

play50:34

That means--

play50:37

The reason you have so many words for describing

play50:44

simple states, feelings, and so forth

play50:47

is not that they are simple and a lot of different things

play50:52

that have nothing to do with one another, but that each of those

play50:55

is a very complicated process.

play50:57

What does it mean when something's hurting?

play51:01

It means it's hard to get anything done.

play51:05

I remember when I first got this insight

play51:07

because I was driving down from Dartmouth to Boston

play51:12

and I had a toothache.

play51:14

And it was really getting very bad.

play51:16

That's why I was driving down because I didn't know what

play51:20

to do and I had a dentist here.

play51:22

And after a while, it's sort of fills up my mind.

play51:26

And I'm saying this is very dangerous because maybe I

play51:31

shouldn't be driving.

play51:33

But if I don't drive, it will get worse.

play51:36

So I really should drive very fast.

play51:44

So what is pain?

play51:46

Pain is a reaction of some very smart parts

play51:49

of your mind to the malfunctioning of other very

play51:54

smart parts.

play51:55

And to describe it you would have

play51:58

to have a really big theory of psychology

play52:02

with more parts than in Freud or in my Society

play52:07

of Mind book, which has only about 300 pages, each of which

play52:13

describes some different aspect of thinking.

play52:18

So if something takes 300 pages to describe, this fools

play52:24

you into thinking, oh, it's indescribable.

play52:27

It must be elemental.

play52:29

It couldn't be mechanical.

play52:34

It's too simple.

play52:36

If pain were like the four gears in a differential.

play52:44

Well, most humans don't--

play52:45

if you show them a differential, and say

play52:48

what happens if you do this?

play52:52

The average intelligent human being is incapable of saying,

play52:56

oh, I see, this will go that way.

play53:01

A normal person can't understand those four little gears.

play53:06

So, of course, pain seems irreducible,

play53:11

because maybe it involves 30 or 40 parts and another 30 or 40

play53:17

of your little society of mind processes are looking at them.

play53:23

And none of them know much about how the others work.

play53:26

And so the way you get your PhD in philosophy is by saying,

play53:35

oh, I won't even try.

play53:37

I will give an explanation for why I can't do it,

play53:41

which is that it's too simple to say anything about.

play53:45

That's why the word qualia only appears once

play53:52

in The Emotion Machine book.

play53:55

And a lot of people complained about that.

play53:58

They said, why don't you-- why doesn't he--

play54:02

they say, you should read, I forget what instead.

play54:09

Anyway.

play54:20

I don't think I have anything else in this beautiful set of--

play54:28

how did it end?

play54:48

If you look on my web page, which I don't think I can do.

play55:07

Oh, well it will probably-- there.

play55:11

I just realized I could quit Word.

play55:20

Well, there's a paper called "Causal Diversity."

play55:24

And it's an interesting idea of how do you explain--

play55:33

how do you answer questions?

play55:36

If there's some phenomenon going on

play55:43

and something like being in pain is a phenomenon,

play55:49

what do you want to say about it?

play55:52

And here's a little diagram that occurred to me once,

play56:06

which is what kinds of sciences or what

play56:13

kinds of disciplines or ways of thinking

play56:19

do you use for answering different kinds of questions?

play56:24

So I got this little matrix.

play56:28

And you ask, suppose something happens and think of it

play56:35

in terms of two dimensions.

play56:37

Namely the world is in a certain stage.

play56:45

Something happens and the world gets into a different state.

play56:49

And you want to know why things change?

play56:52

Like if I stand this up--

play56:59

oh, I can even balance it.

play57:01

I don't know.

play57:06

No I can't.

play57:09

Anyway, what happened there?

play57:12

It fell over.

play57:15

And you know the reason.

play57:20

If it were perfectly centered, it might stand there forever.

play57:26

Or even if it were perfectly balanced,

play57:31

there's a certain quantum probability

play57:34

that its position and momentum are conjugate.

play57:40

So even if I try to position it very precisely,

play57:46

it will have a certain momentum and eventually fall over.

play57:53

It might take a billion years or it might be a few seconds.

play57:59

So if we take any situation, we could

play58:01

ask how many things are affecting

play58:07

the state of this system and how large are they?

play58:10

So how many causes, a few causes or a lot?

play58:17

And what are the effects of each of those?

play58:20

So a good example is a gas, if you

play58:24

add a cylinder and a piston.

play58:42

And if it's this size, then there

play58:48

would probably be a few quadrillion

play58:52

or trillion anyway molecules of air, mostly oxygen and nitrogen

play59:00

and argon there.

play59:03

And every now and then, they would all

play59:07

happen to be going this way instead of this way.

play59:11

And the piston would move out.

play59:13

And it probably wouldn't move noticeably in a billion years.

play59:20

But eventually it would.

play59:22

But anyway, there is a phenomenon

play59:25

where there is a very large number of causes, each of which

play59:28

has a very small effect.

play59:31

And what kind of science or what kind of computer program

play59:38

or whatever would you need to do to predict what will happen

play59:44

in each of those situations?

play59:46

So if there's a very few causes and their effects are small,

play59:52

then you just add them up.

play59:53

Nothing to it.

play59:55

If there is a very large number of causes

play59:57

and each has a large effect, then go home.

play60:04

There's nothing to say because any of those causes

play60:09

might overcome all the others.

play60:13

So I found nine states.

play60:15

And if there are a large number of small causes,

play60:20

then neural networks and fuzzy logic

play60:23

might be a way to handle a situation like that.

play60:27

And if there is a very small number of large causes,

play60:31

then some kind of logic will work.

play60:38

Sometimes there are two causes that are XOR-ed.

play60:40

So if they're both on, nothing happens.

play60:43

If they're both off, nothing happens.

play60:45

And if just one is on, you get a large effect.

play60:49

And you just say it's X or Y, and analogies

play60:57

and example-based reasoning.

play61:04

So these are where AI is good, I think.

play61:21

And for lots of everyday problems like the easy ones

play61:28

or large numbers of small effects,

play61:31

you can use statistics.

play61:33

And small numbers of large effects,

play61:36

you can use common sense reasoning and so forth.

play61:40

So this is the realm of AI.

play61:42

And of course, it changes every year

play61:45

as you get better or worse at handling things like these.

play61:50

If you look at artificial intelligence today,

play61:55

it's mostly stuck up here.

play61:59

There are lots of places you can make money by not

play62:03

using symbolic reasoning.

play62:05

And there are lots of things, which

play62:09

are pretty interesting problems here.

play62:12

And of course, what we want to do is get to this region

play62:19

where the machines start solving problems

play62:23

that people are no good at.
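
A compact restatement of the causal-diversity matrix as Minsky describes it in this stretch of the lecture (labels paraphrased from his remarks, not copied from his slide):

```python
# Which kind of method fits, given how many causes act and how large their effects are.
causal_diversity = {
    ("few causes",  "small effects"): "just add them up",
    ("few causes",  "large effects"): "logic / commonsense reasoning",
    ("many causes", "small effects"): "statistics, neural networks, fuzzy logic",
    ("many causes", "large effects"): "intractable -- 'go home'",
    ("interacting causes (e.g. XOR)", "large effects"): "analogies and example-based reasoning",
}

for (causes, effects), method in causal_diversity.items():
    print(f"{causes:>30} x {effects:<13} -> {method}")
```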

play62:31

So who has a question or a complaint?

play62:35

AUDIENCE: I have a question.

play62:37

MARVIN MINSKY: Great.

play62:38

AUDIENCE: That consciousness again.

play62:41

Would it have been easier--

play62:43

MARVIN MINSKY: Is this working?

play62:45

No.

play62:46

AUDIENCE: It goes to the camera.

play62:48

MARVIN MINSKY: Oh.

play62:49

AUDIENCE: You can hand it to him.

play62:53

MARVIN MINSKY: OK, well I'll try to repeat it.

play62:56

AUDIENCE: Would it have been a bit easier

play62:57

if we never created the suitcase, as you put it

play63:01

in the papers, the suitcase of consciousness,

play63:05

and just kept those individual concepts?

play63:07

The second part of that question is,

play63:10

how do we know this is what they had in mind

play63:13

when they initially created the word consciousness?

play63:18

MARVIN MINSKY: That's a nice question.

play63:21

Where did the word consciousness come from?

play63:23

And would we be better off if nobody had that idea?

play63:28

I think I talked about that a little bit

play63:30

the other day that there's the sort of legal concept

play63:39

of responsibility.

play63:42

And if somebody decided that they would steal something,

play63:51

then they become a thief.

play63:55

And so it's a very useful idea in society

play64:04

for controlling people to recognize

play64:09

which things people do are deliberate and involve

play64:16

some reflection and which things are because they're learnable.

play64:29

It's a very nice question.

play64:30

Would it be better if we had never had the word?

play64:33

I think it might be better if we didn't have it in psychology.

play64:37

But it's hard to get rid of it for social reasons,

play64:41

just because you have to be able to write down

play64:47

a law in some form that people can reproduce.

play65:01

I'm trying to think of a scientific example

play65:03

where there was a wrong term that--

play65:19

can anybody think of an example of a concept that held science

play65:25

back for a long time?

play65:27

Certainly the idea that astronomical bodies

play65:32

had to go in circles, because the idea of ellipses

play65:37

didn't occur much till Kepler.

play65:41

Are there ellipses--

play65:43

Euclid knew about ellipses, didn't he?

play65:50

Anybody know?

play65:51

If you take a string and you put your pencil in there

play66:01

and go like that, that's a terrible ellipse.

play66:13

people knew about ellipses.

play66:15

Certainly Kepler knew it, but didn't invent it.

play66:26

So I think the idea of free will is a social idea.

play66:32

And well, we certainly still have it.

play66:37

Most educated people think there is such a thing.

play66:44

It's not quite as--

play66:45

just as most people think there's

play66:47

such a thing as consciousness, instead of 40 fuzzy sets.

play66:55

How many of you believe in free will?

play67:00

AUDIENCE: My free will.

play67:04

MARVIN MINSKY: It's the uncaused cause.

play67:08

Free will means you can do something for no reason at all.

play67:13

And therefore you're terribly proud of it.

play67:20

It's a very strange concept.

play67:30

But more important, you can blame people for it

play67:33

and punish them.

play67:36

If they couldn't help doing it, then there's

play67:38

no way you can get even.

play67:45

AUDIENCE: It has the implication that there is a choice.

play67:48

MARVIN MINSKY: Yeah.

play67:55

I suppose for each agent in the brain,

play67:58

there's a sort of little choice.

play68:01

But it has several inputs.

play68:11

but I don't think the word choice means anything.

play68:14

AUDIENCE: Well, you have the relationship between free will

play68:16

and randomness.

play68:19

Certainly there are some things that start as random processes

play68:23

and turn out to be causes.

play68:27

MARVIN MINSKY: Well, random things

play68:29

have lots of small causes.

play68:41

So random is over here, many small causes.

play68:47

And so you can't figure out what will happen,

play68:52

because even if you know 99 of those causes,

play68:57

you don't know what the 100th one is.

play68:59

And if they all got XOR-ed by a very simple

play69:03

deterministic logic, then you're screwed.

play69:07

So but again, the freedom of will is a legal idea.

play69:13

It just doesn't make sense to punish people

play69:16

for things they didn't decide to do,

play69:19

if it happened in a part of the nervous system that

play69:22

can't learn.

play69:25

If they can't learn, then you can put them in jail

play69:29

so that they won't be able to do it again.

play69:33

But you'd have to--

play69:37

but the chances are it's not going

play69:38

to change the chance that they'll try to do it

play69:41

if it's in fact random.

play69:45

Did you have-- yeah.

play69:46

AUDIENCE: So machine learning has been on for a long time

play69:53

and like processors are really fast right now,

play69:56

like computers are really fast.

play69:59

Do you believe there is some mistake like people

play70:05

that do research should learn?

play70:07

I mean the--

play70:08

MARVIN MINSKY: Well, machine learning

play70:09

is to me it's an empty expression.

play70:13

Do you mean, are they doing some Bayesian reasoning or--

play70:18

I mean nobody does machine learning.

play70:20

Each person has some particular idea

play70:24

about how to make a machine improve

play70:28

its performance by experience.

play70:31

But it's a terrible expression.

play70:35

AUDIENCE: So like, statistical methods like improving methods

play70:42

to machine learning to the machine

play70:45

to infer like what point will belong to a data set

play70:51

or whatever?

play70:52

MARVIN MINSKY: Sure.

play70:55

AUDIENCE: People that do that, do

play70:57

you think they are doing some mistake?

play70:59

Like do you think there would be more advance into representing

play71:08

intelligence in another way and try to program that?

play71:11

MARVIN MINSKY: The problem is this.

play71:13

Suppose you have-- here's some system that

play71:24

has a bunch of gadgets that affect each other,

play71:31

just a lot of interactions and dependencies.

play71:35

And you want to know if it's in a certain state, what

play71:40

will be the next state.

play71:42

So suppose you put a lion and a tiger in a cage.

play71:51

And how do you predict what will happen?

play71:55

Well, what you could do is if you've

play71:59

got a million lions and a million tigers and a million

play72:02

cages, then you could put a lion and a tiger in each cage.

play72:09

And then you could say the chances that the tiger will win

play72:14

is 0.576239, because that's how many cases the tiger won.

play72:29

And the lion will win--

play72:33

I don't know-- that many.

play72:36

So to me, that's what statistical learning is.

play72:40

It has no way to make smart hypotheses.
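
A literal rendering of the caricature Minsky gives here (a made-up simulation, just to show that frequency counting by itself produces a number without any hypothesis about why):

```python
import random

# "Statistical learning" as caricatured in the lecture: put a lion and a tiger in each
# of a million cages, count who wins, and report the frequency. The 0.576 win rate is
# an arbitrary number wired into this toy, echoing the figure Minsky quotes.
random.seed(0)
TRUE_TIGER_WIN_RATE = 0.576239   # hidden "mechanism" the counter never models

trials = 1_000_000
tiger_wins = sum(random.random() < TRUE_TIGER_WIN_RATE for _ in range(trials))
print(f"estimated P(tiger wins) = {tiger_wins / trials:.6f}")
# The estimate converges on the frequency, but nothing here forms a hypothesis
# about *why* the tiger wins -- which is Minsky's complaint.
```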

play72:45

So to me, anybody who's working on statistical learning

play72:49

is very smart.

play72:51

And he's doing what we did in 1960

play72:55

and quit, 50 years out of date.

play73:00

What you need is a smart way to make a hypothesis

play73:04

about what's going on.

play73:06

Now if nothing's going on except rounding and motion,

play73:10

then statistical learning is fine.

play73:13

But if there's an intricate thing like a differential,

play73:17

which is this thing and that thing summing up

play73:22

in a certain way, how do you decide

play73:28

to find the conditional probability of that hypothesis?

play73:34

And so in other words, you can skim the cream off the problem

play73:45

by finding the things that happened with high probability,

play73:49

but you need to have a theory of what's

play73:52

happening in there to conjecture that something

play73:57

of low probability on the surface will happen.

play74:01

And I just--

play74:04

So here's the thing.

play74:06

If you have a theory of statistical learning,

play74:09

then your job is to find an example that it works on.

play74:13

It's the opposite of what you want for intelligence, which

play74:18

is, how do you make progress on a problem that you don't know

play74:24

the answer to or what kind of answer?

play74:27

So how did they generate?

play74:28

I don't know.

play74:30

Are you up on--

play74:32

how do the statistical Bayesian people

play74:35

decide which conditional probability to score?

play74:43

Suppose there are 10 variables; then there are

play74:46

2 to the 10th or 1,000 conditional probabilities

play74:51

to consider.

play74:53

If there's 100 variables--

play74:55

and so you can do it.

play74:57

2 to the 10th is nothing.

play75:00

And a fast computer can do many times 1,000 things per second.

play75:07

But suppose there are 100 variables: 2 to the 100 is about 10 to the 30.

play75:15

No computer can do that.
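
As a quick check on the arithmetic (the variable counts are from the lecture; the snippet just does the counting):

```python
# Joint configurations of n binary variables grow as 2**n.
for n in (10, 100):
    print(f"{n} variables -> {2 ** n} configurations")

# 10 variables  -> 1,024: trivial for any computer
# 100 variables -> about 1.27e30: far beyond exhaustive enumeration
```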

play75:19

So I'm saying statistical learning is great.

play75:28

It's so smart.

play75:32

How do--

play75:43

I'm repeating myself.

play75:44

Anybody have an argument about that?

play75:47

I bet several of you are taking courses

play75:49

in statistical learning.

play75:51

What did they say about that problem?

play75:52

AUDIENCE: Trial and error.

play75:54

MARVIN MINSKY: What?

play75:55

AUDIENCE: Largely trial and error.

play75:57

MARVIN MINSKY: Yeah, but what do you try when it's 10 to 30th?

play76:01

Yeah.

play76:05

So do they say, I quit, this theory is not

play76:08

going to solve hard problems.

play76:12

So once you admit that, and say I'm

play76:15

working on something that will solve

play76:16

lots of easy problems, more power to you.

play76:20

But please don't teach it to my students.

play76:26

AUDIENCE: What do you think about the relationship

play76:28

of statistical inference methods?

play76:30

MARVIN MINSKY: I can't hear you.

play76:40

So in other words, the statistical learning people

play76:43

are really in this place, and they're wasting our time.

play76:47

However, they can make billions of dollars

play76:50

solving easy problems.

play76:54

There's nothing wrong with it.

play76:57

It just has no future.

play77:00

AUDIENCE: What do you think about the relationship

play77:02

between statistical learning methods?

play77:03

MARVIN MINSKY: Of what?

play77:04

AUDIENCE: The relation between statistical learning method

play77:06

and maybe something--

play77:07

MARVIN MINSKY: I couldn't get the fourth one.

play77:09

AUDIENCE: Relationship of statistical--

play77:12

MARVIN MINSKY: Statistical, oh.

play77:13

AUDIENCE: --to more abstract ideas like boosting

play77:18

or something where the method they are using at one

play77:21

and they--

play77:22

MARVIN MINSKY: There's a very simple answer for that.

play77:34

It's inductive probability.

play77:36

There is a theory.

play77:46

I wonder if anybody could summarize that nicely.

play77:50

Have you tried?

play77:51

AUDIENCE: Basically--

play77:53

MARVIN MINSKY: I can try it next time.

play77:55

AUDIENCE: You should assume that everything

play77:58

is generated by a program.

play78:00

And your prior over the space of possible programs

play78:04

should be based on the description length of the program.

play78:08

MARVIN MINSKY: Suppose there is a set of data, then

play78:10

what's the shortest description you can make of it?

play78:14

And that will give you a chance of having a very

play78:19

good explanation.

play78:22

Now what Solomonoff did was say, suppose that something's

play78:26

happened, and you make all possible descriptions of what

play78:35

could have happened, and then you take the shortest one,

play78:40

and see if that works and see what it predicts

play78:46

will happen next.

play78:48

And then you take--

play78:50

say, it's all binary, then there's

play78:53

two possible descriptions that are one bit longer.

play78:56

And maybe one of them fits the data.

play78:58

And the other doesn't.

play79:01

So you give that one half the weight.

play79:06

And so Solomonoff imagines an infinite sum

play79:11

where you take all possible computer programs

play79:15

and see which of them produce that data set.

play79:21

And if they produce that data set,

play79:23

then you run the program one more step and see what it does.

play79:27

In other words, suppose your problem

play79:29

is you see a bunch of data about the history of something,

play79:34

like what was the price of a certain stock

play79:36

for the last billion years, and you

play79:39

want to see will it go up or down tomorrow.

play79:43

Well, you make all possible descriptions of that data set

play79:47

and weight the shortest ones much more

play79:55

than the longer descriptions.
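
In symbols, the weighting being sketched is the usual Solomonoff-style prior: every program that reproduces the data contributes, and shorter programs contribute exponentially more. The notation below is a standard rendering, not something written on the board:

```latex
M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-|p|},
\qquad
P(x_{n+1} = 1 \mid x_{1:n}) \;=\; \frac{M(x_{1:n}1)}{M(x_{1:n})}
```

Here U is a fixed universal machine and |p| is the length of program p in bits.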

play79:57

So the trouble with that is that you can't actually

play80:02

compute such things, because it's sort of uncomputable.

play80:09

However, you can use heuristics to approximate it.
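
One crude way to get the flavor of such an approximation: restrict the "programs" to a tiny, hypothetical family (here, "repeat this pattern forever"), weight each fitting program by 2 to the minus its length, and predict by weighted vote. This is only an illustration of the idea, not how any of those researchers actually do it:

```python
def predict_next_bit(bits, max_period=8):
    """Toy MDL-flavored predictor over a tiny program family:
    'repeat pattern p forever', with description length = len(p)."""
    votes = {0: 0.0, 1: 0.0}
    for period in range(1, min(max_period, len(bits)) + 1):
        pattern = bits[:period]
        # Does this 'program' reproduce the observed data?
        if all(bits[i] == pattern[i % period] for i in range(len(bits))):
            weight = 2.0 ** (-period)              # shorter pattern, bigger weight
            votes[pattern[len(bits) % period]] += weight
    if votes[0] == votes[1] == 0.0:
        return None  # nothing in this tiny family explains the data
    return max(votes, key=votes.get)

# An alternating string is best explained by the two-bit pattern [0, 1],
# so the weighted vote predicts 0 as the next bit.
print(predict_next_bit([0, 1, 0, 1, 0, 1]))  # -> 0
```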

play80:14

And so there are about a dozen people

play80:17

in the world who are making theories of how

play80:21

to do Solomonoff induction.

play80:25

And that's where--

play80:28

Now another piece of advice for students

play80:30

is if you see a lot of people doing something,

play80:34

then if you want to be sure that you'll have a job someday,

play80:41

do what's popular, and you've got a good chance.

play80:44

If you want to win a Nobel Prize,

play80:47

or solve an important problem, then

play80:50

don't do what's popular because the chances are you'll just be

play80:55

a frog in a big pond of frogs.

play81:00

So I think there's probably only half a dozen people

play81:08

in the world working on Solomonoff induction,

play81:11

even though it's been around since 1960.

play81:21

Because it needs a few more ideas on how to approximate it.

play81:28

But unless you want to make a living,

play81:31

don't do Bayesian learning.

play81:37

Yeah.

play81:41

AUDIENCE: I don't know if this actually works.

play81:44

But if you take, like, Bayesian learning and we add some kind of advice--

play81:48

sometimes like let's say we see something

play81:52

with very small probability and we say

play81:56

that part of it is just never considered any good.

play81:59

Would that kind of like be like what

play82:01

we're trying to do with getting representations and things?

play82:05

I mean--

play82:05

MARVIN MINSKY: Yeah, I think--

play82:07

AUDIENCE: Would this make it much more discrete

play82:10

and kind of make it much easier and more tractable?

play82:12

Or is it like--

play82:14

my question would be, is it really

play82:16

representations for things saying,

play82:19

this chair has this representation.

play82:20

Isn't that kind of doing the same like kind

play82:23

of statistical model, but just throwing away

play82:26

a lot of the stuff that we might not want to look at,

play82:31

what we consider as things that shouldn't be looked at?

play82:35

MARVIN MINSKY: I think--

play82:41

say there's the statistical thing

play82:44

and there's the question of--

play82:49

suppose there are a lot of variables: x1, x2, and so on,

play82:55

up to x sub 10 to the ninth, or rather 10 to the fifth.

play83:02

Let's say there's 100,000 variables.

play83:07

Then there are 2 to the 100,000 conditional probabilities, the P sub ij's.

play83:23

But it isn't just i and j; the subscripts can run up to 10,000 of them.

play83:32

So what you need is a good idea for which things to look at.

play83:36

And that means you want to take commonsense knowledge

play83:39

and jump out of the Bayesian knowledge.

play83:43

The problem with a Bayesian learning system

play83:45

is you're estimating the values of conditional probabilities.

play83:51

But you have to decide which conditional probabilities

play83:54

to estimate the values.
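
To make the point concrete: with n binary variables, brute force over conditioning sets means on the order of 2^n configurations, but if prior knowledge tells you each variable depends on at most k named parents, the tables shrink to about n times 2^k entries. The numbers below are illustrative only:

```python
def table_entries(n_vars, max_parents):
    """Entries in the conditional probability tables when each of n_vars
    binary variables is allowed at most max_parents binary parents."""
    return n_vars * (2 ** max_parents)

n = 100_000
print(f"brute force joint: 2**{n} configurations")          # hopeless
print(f"sparse structure : {table_entries(n, 3)} entries")  # 800,000: manageable
```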

play83:57

And the answer is--

play84:00

oh, look at it another way.

play84:02

Look at history and you'll see 1,000 years go by,

play84:08

what was the population of the world

play84:11

between 500 AD, between the time of Augustine

play84:16

and the time of Newton, or 1500, like O'Brien, those people,

play84:23

1,000 years?

play84:25

And I don't know is there 100 million people in the world,

play84:28

anybody know?

play84:30

About how many people were there in 1500?

play84:38

Don't they teach any history?

play84:43

I think history starts--

play84:45

I changed schools around third grade.

play84:50

So I never-- there was no European history.

play84:53

So to me American history is recent

play85:00

and European history is old.

play85:03

So 1776 is after 1815.

play85:10

That is, to me, history ends with Napoleon,

play85:13

because then I got into fourth grade.

play85:18

Don't you all have that?

play85:21

You've got gaps in your knowledge because the curricula

play85:25

aren't--

play85:26

somebody should make a map of those.

play85:30

AUDIENCE: There were about half a billion people in 1500.

play85:32

MARVIN MINSKY: That's a lot.

play85:33

AUDIENCE: Yeah, I found it on the internet.

play85:36

MARVIN MINSKY: This is from Google?

play85:37

AUDIENCE: This is from Wikipedia.

play85:40

MARVIN MINSKY: Well.

play85:41

AUDIENCE: It's on the timeline of people.

play85:44

MARVIN MINSKY: OK.

play85:45

So there's half a billion people,

play85:47

not thinking of the planets going in ellipses.

play85:56

So why is that?

play85:59

How is a Bayesian person going to make the right hypothesis

play86:05

if it's not in the algebraic extension of the things

play86:12

they're considering?

play86:16

I mean, it could go and it could look it up in Wikipedia.

play86:20

But Bayesian thing doesn't do that.

play86:24

Real AIs will.

play86:26

Yeah.

play86:28

AUDIENCE: But when we are kids, don't we learn the common sense

play86:31

knowledge?

play86:32

MARVIN MINSKY: Well--

play86:33

I'm saying what happened in the 1,000 years?

play86:38

You actually have to tell people what to consider.

play86:42

I'm telling the Bayesians to quit that and do

play86:45

something smart.

play86:46

Somebody has to tell them.

play86:48

I don't measure up to Newton.

play86:50

But they need one.

play86:56

What are they doing?

play86:59

What do they hope to accomplish?

play87:01

How are they going to solve a hard problem.

play87:06

Well, they don't have to.

play87:09

The way you predict the stock market today

play87:12

is Bayesian, with a reaction time of a millisecond.

play87:16

And you can get all the money from the poor people that

play87:20

were investing in your bank.

play87:23

It's OK, who cares?

play87:27

But maybe it shouldn't be allowed.

play87:28

I don't know.

play87:39

Yeah.

play87:41

AUDIENCE: Do you think the goal is

play87:43

to replace human intelligence that

play87:45

can create a computer that will be

play87:47

able to reason by itself or is there also the ability

play87:50

to create a system--

play87:52

MARVIN MINSKY: We have to stop getting sick and dying

play87:54

and becoming senile.

play87:57

Yes.

play87:58

Now there are several ways to fix this.

play88:02

One is to freeze you and just never thaw you out.

play88:07

But we don't want to be stuck with people like us

play88:12

for the rest of all time, because, you know,

play88:16

there isn't much time left.

play88:18

The sun is going to be a red giant in three billion years.

play88:24

So we have to get out of here.

play88:26

And the way to get out of here is make yourself

play88:30

into smart robots.

play88:39

Help.

play88:42

Let's get out of this.

play88:43

We have to get out of these bodies.

play88:48

Yeah.

play88:49

AUDIENCE: So you talked a lot about emotions.

play88:52

But emotions you described as like states of mind.

play88:57

And if you have, for example,

play88:59

n states of mind that represent--

play89:03

I don't know-- log n bits of information, why

play89:07

should we spend so much time talking

play89:12

about so little information?

play89:16

MARVIN MINSKY: Talking about?

play89:17

AUDIENCE: Little information.

play89:18

Like if we had n states or n emotions,

play89:23

they would represent log n bits of information.

play89:27

And like that's very different information that they will see.

play89:31

So for example if I'm happy or sad,

play89:37

like if I had just two states, happy or sad?

play89:41

MARVIN MINSKY: If we just had two states,

play89:43

you couldn't compute anything.

play89:46

I'm not sure what you're getting at.

play89:49

AUDIENCE: Like emotions seem to carry too little information.

play89:54

They don't represent much information inside our brain.

play90:00

Why should they be so important in intelligence since they--

play90:05

MARVIN MINSKY: I don't think--

play90:07

I think emotions generally are important for lizards.

play90:12

I don't think they're important for humans.

play90:16

AUDIENCE: Like if we--

play90:17

MARVIN MINSKY: You have to stay alive to think.

play90:20

So you've got a lot of machinery that makes sure

play90:24

that you don't starve to death.

play90:26

So there's gadgets that measure your blood sugar

play90:30

and things like that and make sure that you eat.

play90:33

So those are very nice.

play90:36

On the other hand, if you simplified it,

play90:39

you just need three volts to run the CPU.

play90:43

And then you don't need all that junk.

play90:48

AUDIENCE: So they're not very important for us.

play90:51

It's just--

play90:52

MARVIN MINSKY: They are only important to keep you alive.

play90:54

AUDIENCE: Yeah.

play90:55

MARVIN MINSKY: But they don't help you write your thesis.

play91:07

I mean, the people who consider such questions are the science

play91:15

fiction writers.

play91:18

So there are lots of thinking about what kind of creatures

play91:23

could there be besides humans.

play91:26

And if you look at detective stories or things,

play91:31

then you find that there are some good people and bad people

play91:35

and stuff like that.

play91:37

But to me, general literature is all the same.

play91:41

When you've read 100 books, you've read them all,

play91:45

except for science fiction.

play91:47

That's my standard joke, that I don't think much of literature

play91:53

except--

play91:56

because the science fiction people say

play91:59

what would happen if people had a different set of emotions

play92:03

or different ways to think?

play92:06

Or one of my favorite ones is Larry Niven and Jerry

play92:12

Pournelle, who just wrote a couple of volumes about what

play92:16

about a creature that has one big hand and two little hands.

play92:22

Do you remember what it's called?

play92:25

The Gripping Hand.

play92:28

This is for holding the work, while this one

play92:30

holds the soldering iron and the solder.

play92:35

That's right.

play92:37

That's how the book sort of begins.

play92:40

And there is imagination.

play92:42

On the other hand, you can read Jane Eyre.

play92:45

And it's lovely.

play92:49

But do you end up better than you are or slightly worse?

play92:56

And if you read hundreds of them--

play92:58

luckily she only wrote 10, right?

play93:05

I'm serious.

play93:08

You have to look at Larry Niven and Robert

play93:11

Heinlein and those people.

play93:15

And when you look at the reviews by the literary people,

play93:18

they say the characters aren't developed very well.

play93:23

Well, foo, the last thing you want in your head

play93:28

is a well-developed literary character.

play93:32

What would you do with her?

play93:39

Yes.

play93:42

I love your questions.

play93:44

Can you wake them up?

play93:50

AUDIENCE: When we are small babies,

play93:53

like we kind of are creating this common sense knowledge.

play93:59

And we have a lot of different inputs.

play94:01

So for example I'm talking to you,

play94:03

there is this input of the sound, the vision,

play94:06

like all these different inputs.

play94:08

Aren't we so involved when we are babies,

play94:14

like in very positive relations between these inputs?

play94:20

For example, the K-lines, is it like the machine learning guys

play94:27

argue that with a lot of variables

play94:31

and maybe 10 to the third was small set.

play94:34

What would be the difference if you go deep down?

play94:38

Are they trying to find like a very simple path?

play94:43

MARVIN MINSKY: I think you're right in the sense that I'll

play94:46

bet that if you take each of those highly advanced brain

play94:50

centers, and say, well it's got something generating

play94:54

hypotheses maybe or something.

play94:57

But underneath it, you probably have something very

play95:00

like a Bayesian reinforcement thing.

play95:03

So they're probably all over the place and maybe

play95:08

90% of your machinery is made of little ones.

play95:11

But it's the symbolic things and the K-lines that give them

play95:16

the right things to learn.

play95:18

But I think you raise another question, which

play95:24

I'm very sentimental about because of the history of how

play95:31

our projects got started, namely nobody knew much about how

play95:38

children develop in 1900.

play95:44

For all of human history, as far as I know,

play95:48

generally babies are regarded as like ignorant adults.

play95:55

There weren't-- I mean, there aren't many theories

play96:00

of how children develop.

play96:02

And it isn't till 1930 that we see any real substantial child

play96:10

psychology.

play96:12

And the child psychology is mostly

play96:15

that one Swiss character, Jean Piaget.

play96:27

It's pronounced John for some reason.

play96:37

And he had three children and observed them.

play96:44

I think his first publication was something about mushrooms.

play96:53

He had been in botany.

play96:55

Is that right?

play96:56

Can anybody remember?

play96:58

Cynthia, do you remember what Piaget's original field was?

play97:01

AUDIENCE: Biology.

play97:02

MARVIN MINSKY: Something.

play97:04

But then he studied these children

play97:06

and he wrote several books about how they learned.

play97:09

And as far as I know, this is about the first time in history

play97:14

that anybody tried to observe infants very closely

play97:19

and chart how they learned and so forth.

play97:23

And my partner, Seymour Papert, was Piaget's assistant

play97:32

for several years before he came to MIT.

play97:36

And we started the--

play97:39

I started the artificial intelligence group

play97:42

with John McCarthy who had been one

play97:44

of my classmates in graduate school at Princeton in math,

play97:48

actually.

play97:50

Then McCarthy went to start another AI group at Stanford,

play97:55

and Seymour Papert appeared on my scene at just the same time.

play98:00

And it was a kind of miracle because we had both--

play98:06

we met in some meeting in London where we both presented

play98:09

the same machine learning paper on Bayesian probabilities

play98:15

in some linear learning system.

play98:19

We both hit it off because we obviously thought the same way.

play98:25

But anyway, Papert had been one of the principal people

play98:31

conducting the experiments on young children in Piaget's

play98:35

group.

play98:36

And when Piaget got older and retired in about 1985, Cynthia,

play98:43

do you remember when Piaget quit?

play98:48

It's about when we started.

play98:50

AUDIENCE: Didn't he die in 1980 or something.

play98:52

MARVIN MINSKY: Around then.

play98:55

There were several good researchers there.

play98:57

AUDIENCE: He was trying to get Seymour to take over.

play99:00

MARVIN MINSKY: He wanted Seymour to take over at some point.

play99:03

And there were several good people there, amazing people.

play99:07

But the Swiss government sort of stopped supporting it.

play99:12

And the greatest laboratory on child psychology in the world

play99:18

faded away.

play99:19

It's closed now.

play99:22

And nothing like it ever started again.

play99:26

So there's a strange thing, maybe the most important part

play99:31

of human psychology is what happens in the first 10

play99:33

years, first 5 years?

play99:36

And if you're interested in that,

play99:39

you could find a few places where somebody

play99:41

has a little grant to do it.

play99:43

But what a tragedy.

play99:46

Anyway, we tried to do some of it here.

play99:49

But Papert got more interested in--

play99:53

and Cynthia here-- got more interested in how

play99:57

to improve early education than in finding out how children work.

play100:08

Is there any big laboratory at all doing that anymore?

play100:12

Where is child psychology?

play100:13

There are a few places, but none of them

play100:18

are famous enough to notice.

play100:21

AUDIENCE: For a while there was stuff in Ontario,

play100:25

and Brazelton.

play100:26

MARVIN MINSKY: Brazelton Yeah.

play100:29

Anyway.

play100:30

It's curious because you'd think that

play100:33

would be one of the most important things:

play100:37

how do humans develop?

play100:41

It's very strange.

play100:43

Yeah.

play100:43

AUDIENCE: So like infants, when they are about a year old,

play100:48

I think there's a favorite moment,

play100:49

like they learn how their goal, like how to achieve goals,

play100:55

like rock the knees.

play100:57

And then after one year, they learn how to clap,

play101:01

how to achieve a means.

play101:02

So for example, I think they do the experiment

play101:04

of putting like a hand in their ear, like left ear.

play101:09

And then chimpanzees do the same as one-year-old infants.

play101:19

And somehow I believe that, for example,

play101:24

reflexes between infants and chimpanzees are very similar.

play101:30

We tend to represent things better,

play101:33

because like we have this--

play101:36

MARVIN MINSKY: You're talking about chimps?

play101:38

AUDIENCE: Chimpanzees.

play101:39

MARVIN MINSKY: Yep.

play101:39

AUDIENCE: They are like apes in general.

play101:40

MARVIN MINSKY: Right.

play101:41

AUDIENCE: I believe there are some apes that

play101:45

can learn sign language.

play101:46

I am not sure if that's right.

play101:49

But they can take the goals.

play101:52

And, for example, dogs can achieve a goal.

play101:55

But they can't imagine themselves like each moment.

play102:05

Maybe that's because of how they represent things,

play102:08

maybe they represent badly.

play102:11

They don't have good hierarchy.

play102:13

MARVIN MINSKY: There's some very interesting questions

play102:15

about that.

play102:16

That's why we need more laboratories.

play102:19

But here's an example.

play102:26

We had a researcher at MIT named Richard Held.

play102:30

And he did lots of interesting experiments on young animals.

play102:35

So for example, he discovered that if you

play102:38

take a cat or a dog, if you have a dog on a leash

play102:44

and you take it somewhere, there's

play102:47

a very good chance it will find its way back because it

play102:50

remembers what it did.

play102:52

But he discovered if you take a cat or a dog

play102:54

and you take it for a ride and go somewhere,

play103:01

it won't learn because it didn't do it itself.

play103:07

So in other words, if you take it on a ride passively,

play103:13

even a dozen times or 100 times, it

play103:16

won't learn that path, if it didn't actually

play103:18

have any motor reactions.

play103:21

So that was very convincing.

play103:23

And the world became convinced that for spatial learning,

play103:27

you have to participate.

play103:33

Many years later, we were working

play103:38

with a cerebral palsy guy who had never

play103:44

locomoted himself very much.

play103:47

I'm trying to remember his name-- well,

play103:51

name doesn't matter.

play103:53

But the Logo project had started.

play103:57

And by putting a hat with a stick on his head,

play104:06

he was able to type keys, which is really

play104:10

very boring and tedious.

play104:13

And believe it or not, even though he could barely talk,

play104:19

he quickly learned to control the turtle,

play104:23

a floor turtle, which you could tell its turn left and right

play104:29

and go forward one unit, stuff like that.

play104:33

And the remarkable thing was that no sooner did

play104:37

he start controlling this turtle,

play104:42

than the turtle went over here and he turned it around

play104:50

and he wanted it to go back to here.

play104:57

And everybody predicted that he would

play105:00

get left and right reversed because he had never had

play105:04

any experience in the world.

play105:06

But right off, he knew which way to do it.

play105:13

So he had learned spatial navigation

play105:16

pretty much never having done much of it himself.

play105:20

And Richard Held was very embarrassed,

play105:25

but had to conclude that what you learned from cats and dogs

play105:29

might not apply to people.

play105:33

We ran into a little trouble because there

play105:38

was another psychologist we tried to convince of this.

play105:44

And that psychologist said, well, maybe this

play105:47

was-- it took three years for him to develop a lot of skills.

play105:54

And the psychologist said, well, maybe that's a freak.

play106:01

I won't approve your PhD thesis until you do a dozen of them.

play106:11

I didn't mention the psychologist's name, because--

play106:20

Anyway, so we had a sort of Piaget like laboratory.

play106:24

But we never worked with infants.

play106:26

Did we?

play106:33

You think it would be a big industry.

play106:37

Nixon once came around and asked.

play106:42

There was a great senator in Massachusetts,

play106:43

I forget his name.

play106:47

He said, what can we do for education?

play106:50

The senator said, research on children how they learn.

play106:55

And Nixon said, that's great idea.

play106:58

Let's put a billion dollars into it.

play107:01

And he couldn't convince anybody in his party

play107:05

to support this idea.

play107:10

That's the only good thing I've heard about Nixon, except for opening

play107:13

China, I guess.

play107:16

He was determined to do something

play107:19

about early education.

play107:20

Oh, the teachers union couldn't stand it.

play107:26

He didn't get any support from the education business.

play107:32

I'll probably remember the senator's name later.

play107:45

Who's next?

play107:49

Yes.

play107:50

AUDIENCE: So it's kind of along the same lines.

play107:52

So if we come out thinking about how we represent things,

play107:58

even if we think about language itself,

play108:02

so the early, early stages of learning the language

play108:05

obviously have a lot of this statistical learning involved

play108:10

where we learned morphology of the language,

play108:14

rather than learning that language is actually

play108:17

representing things.

play108:18

So for example, if we're going to learn

play108:25

like how certain letters come one after the other

play108:29

or how they go, we kind of listen

play108:32

and we see that it's the way everyone else does it.

play108:37

And there are certain words that exist

play108:39

and certain words don't exist, even if they could exist.

play108:42

I guess these are all like statistical learning.

play108:45

And then like after this structure is there,

play108:49

we use this structure to make this representation.

play108:55

So isn't it-- like wouldn't it kind

play108:57

of be right to say that these two are basically

play109:01

the same thing, just that the representation, the more

play109:07

complex one is just another version

play109:10

of the statistical learning where we've just done it?

play109:16

MARVIN MINSKY: Well there's context free grammar.

play109:18

And there's the grammars that have push down lists and stacks

play109:24

and things like that.

play109:25

So you actually need something like a programming language

play109:33

to generate and parse sentences.

play109:39

There's a little recursive quality.

play109:43

I don't know how you can--

play109:44

it's hard to represent that in a Bayesian network

play109:48

unless you have a push down stack.

play109:52

The question is does the brain have

play109:56

push down stacks or are they only three deep or something?

play110:00

Because if you say this is the dog that

play110:02

bit the cat that chased the rat that so on,

play110:07

nobody has any trouble.

play110:09

And that's a recursion.

play110:10

But if you say this is the dog that the cat

play110:13

that the rat bit ate, people can't parse that.
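
A small sketch of the contrast: right-branching relative clauses close one at a time, while center-embedded ones leave every clause open until the verbs arrive, so they pile up like a push-down stack. The sentence templates below are just an illustration, not a model of how the brain parses:

```python
def right_branching(nouns, verbs):
    """'the dog that bit the cat that chased the rat': each relative
    clause finishes before the next one opens, so a listener never holds
    more than one unfinished clause at a time."""
    out = f"the {nouns[0]}"
    for verb, noun in zip(verbs, nouns[1:]):
        out += f" that {verb} the {noun}"
    return out

def center_embedded(nouns, verbs):
    """'the dog that the cat that the rat bit chased': every 'the X that'
    stays open until all the verbs come at the end, so unfinished clauses
    stack up, and people lose track past a depth of two or three."""
    openings = " ".join(f"the {n} that" for n in nouns[:-1])
    return f"{openings} the {nouns[-1]} " + " ".join(verbs)

nouns = ["dog", "cat", "rat"]
print(right_branching(nouns, ["bit", "chased"]))
# -> the dog that bit the cat that chased the rat
print(center_embedded(nouns, ["bit", "chased"]))
# -> the dog that the cat that the rat bit chased
```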

play110:19

AUDIENCE: It's empirical evidence

play110:20

that the brain got its tail cut off.

play110:22

MARVIN MINSKY: That it's what?

play110:23

AUDIENCE: The brain influenced tail cut off representation.

play110:26

MARVIN MINSKY: Yeah.

play110:27

Why is language restricted in that you can't embed clauses

play110:33

past the level of two or three, which Chomsky never admitted.

play110:39

AUDIENCE: Can't it be the case that we also learn that?

play110:43

Like we also learned that certain patterns can only

play110:48

exist between words.

play110:54

We do parse it using a parse tree.

play110:57

We learn using a parse tree.

play111:00

Like we learn that when you hear a sentence,

play111:04

go after trying to parse it using

play111:07

three words, two words, four words and just try that,

play111:10

see if it works.

play111:11

If it doesn't, try another way.

play111:14

It can't distance itself from like learning

play111:17

the number of words that usually happen in a clause.

play111:20

Is it this type of learning?

play111:23

MARVIN MINSKY: Well I'm not sure why

play111:26

is it very different from learning that you

play111:30

have to open a bottle--

play111:32

open the box before you take the thing out.

play111:37

We learn procedures.

play111:38

I'm not sure--

play111:42

I don't believe in grammar, that is.

play111:44

AUDIENCE: If we were trying to teach a machine

play111:47

to be like a human being, would we just lay out the very basics

play111:51

and let it grow like a child with learning

play111:55

or would we put these representations in there,

play111:58

like put the representations--

play112:00

MARVIN MINSKY: Well, a child doesn't learn language

play112:02

unless there are people to teach it.

play112:04

AUDIENCE: Right.

play112:05

MARVIN MINSKY: However--

play112:06

AUDIENCE: So maybe we can expose the machine to that white board

play112:10

to--

play112:11

or we can expose it to the world somehow to some kind of input.

play112:17

MARVIN MINSKY: I'm not sure what question you're asking.

play112:22

Is all children's learning of a particular type

play112:26

or are they learning frames or are they learning grammar rules

play112:30

or do you want a uniform theory of learning?

play112:34

AUDIENCE: I think which one is a better approach,

play112:36

that the machine has very basic things and it learns?

play112:41

So, for a machine, should we make machines as infants

play112:46

and let them learn things, by for example giving them

play112:49

a string that's from the internet,

play112:53

from communication over the internet

play112:54

or communication among other human beings,

play112:56

just like a child learns from seeing his parents talk.

play112:59

MARVIN MINSKY: Several people have--

play113:00

AUDIENCE: Is it better to actually

play113:02

inject all that knowledge into the machine,

play113:04

and then expect it to act on it from the beginning?

play113:07

MARVIN MINSKY: Well, if you look at the history,

play113:09

you'll find that--

play113:11

I'm not sure how to look it up.

play113:13

But quite a few people have tried

play113:15

to make learning systems that start with very little

play113:19

and keep developing.

play113:21

And the most impressive ones were the ones by Douglas Lenat.

play113:39

But eventually he gave up.

play113:41

And he had systems that learned a few things.

play113:46

But they petered out.

play113:48

And he changed his orientation to trying to build up

play113:52

commonsense libraries.

play113:54

But I'm trying to think of the name

play113:59

for self-organizing systems.

play114:04

There are probably a dozen.

play114:07

If you're interested, I'll try to find some of them.

play114:11

But for some reason people have given up on that,

play114:15

and it's certainly worth a try.

play114:25

As for language, I think the theory that language is based

play114:30

on grammar is just plain wrong.

play114:33

I suspect it's based on certain kinds of frame manipulation

play114:39

things.

play114:39

And the idea of abstract syntax is really not very productive

play114:47

or it hasn't--

play114:49

anyway.

play114:51

Because you want it to be able to fit into a system for

play114:54

inference as well.

play114:58

I'm just bluffing here.

play115:02

Did you have a question?

play115:03

AUDIENCE: I was just going to say

play115:04

it seems that what you're saying might

play115:06

be considered to be a form of example-based reasoning.

play115:10

You just have lots and lots of examples,

play115:12

which are not unlike the work that DuBois

play115:15

does, where a child learns the word water from hearing lots of people

play115:22

use that word in different contexts and examples.

play115:28

MARVIN MINSKY: While you're here,

play115:30

Janet Baker was a pioneer in speech recognition.

play115:35

How come the latest system suddenly got better?

play115:40

Are they just bigger databases?

play115:42

AUDIENCE: That's a lot of it.

play115:45

MARVIN MINSKY: Of course, the early ones

play115:48

you had to train for an hour.

play115:51

AUDIENCE: But we now have so many more examples

play115:54

and exemplars that you can much better characterize

play115:59

the variability, which is tremendous, between people.

play116:03

And you typically have multiple models, a lot

play116:07

of different models of how--

play116:10

so it knows in a space of how people say different things

play116:16

and allowing you to characterize it really well,

play116:20

so it will do a much better job.

play116:24

You always do better if you have models of a given person

play116:27

speaking and modeling their voice.

play116:29

But you can now model a population much better when

play116:32

you have so much more data.

play116:36

MARVIN MINSKY: They're really getting useful.

play116:38

AUDIENCE: Oh, dear.

play116:47

MARVIN MINSKY: OK, unless somebody

play116:48

has a really urgent question.

play116:52

Thanks for coming.
