4. Question and Answer Session 1

MIT OpenCourseWare
4 Mar 2014 · 105:58

Summary

TLDR: In this conversation, Professor Marvin Minsky holds an in-depth discussion with the audience across a range of topics, including neuroscience, artificial intelligence, cognitive psychology, and language learning. Minsky shares his views on brain function, neuronal connections, and the possible role of glial cells. He also discusses how the brain handles different mental states, such as denial, bargaining, frustration, and depression, and explores differences in cognitive ability between animals (such as dogs) and humans. In addition, Minsky offers his perspective on how to evaluate machine intelligence, including whether machines need genuine understanding or only need to simulate human behavior. The discussion is full of insight into the human mind and machine learning, and shows Minsky's deep engagement with these complex topics.

Takeaways

  • 📚 Supporting MIT OpenCourseWare helps it continue to offer high-quality educational resources for free.
  • 🧠 Marvin Minsky discusses how the brain works, in particular the possible role of glial cells.
  • 🤔 Minsky worries that the neuroscience community may be too focused on the chemicals passing between neurons while neglecting the importance of the network architecture.
  • 🐋 He conjectures that some animals (such as whales) may have sophisticated planning abilities, possibly tied to their capacity to remember long sequences with semantic content.
  • 🐕 Minsky is curious about the cognitive abilities of dogs, especially whether they can plan ahead and compare alternative courses of action.
  • 🤖 He discusses the development of artificial intelligence, stressing the role of computer science in understanding human cognitive abilities.
  • 🧐 Minsky argues that even a trivial difference in the number of registers a computer has can make an exponential difference in how fast certain problems can be solved.
  • 🧬 He compares human intelligence with that of other animals, including his views on dogs' cognitive abilities and whether they have anything like human reasoning.
  • 📈 Minsky discusses how to evaluate intelligent machines, especially once they reach or exceed human-level intelligence.
  • 🌐 He also talks about using different models to solve problems, and how people choose a way of thinking when they face a problem.
  • ❓ Finally, Minsky raises the question of how to recognize a problem and choose the best data and methods for solving it, which remains an unsolved core problem in AI and cognitive psychology.

Q & A

  • How can you donate to MIT OpenCourseWare or view additional materials?

    -To make a donation or to view additional materials, visit ocw.mit.edu.

  • What is Marvin Minsky's view on how the brain works?

    -Minsky does not want to speculate on how the brain works, because the neuroscience literature contains a huge number of papers proposing different hypotheses, including the idea that glial cells rather than neurons may be doing the work.

  • How does the number of connections a neuron has affect its ability to process information?

    -A typical neuron has about 100,000 connections, which suggests that very complex processing must go on inside the neuron, possibly with the help of the other cells that support it.

  • How did neuroscience develop historically?

    -Early on it was generally believed that all neurons were connected; only around 1890 did the clear idea emerge that nerve cells are not arranged in a continuous network but are separated by small gaps called synapses.

  • What is Minsky's view of the chemicals in the brain?

    -Minsky notes that although a great deal of folklore has grown up about what these chemicals do when parts of the brain are activated, what we now know about the nervous system is that the connections along a neural pathway tend to alternate, not always but frequently, between inhibitory and excitatory.

  • What happens in the brain during an epileptic seizure?

    -In a seizure, enough electrical and chemical activity (mostly electrical) builds up that large parts of the brain begin to fire in synchrony, and the activity spreads like a forest fire.

  • What is Minsky's view on consciousness?

    -Minsky thinks the word "consciousness" is too vague and bundles together many different meanings. Consciousness probably involves many different systems, and different animals may have different subsets of them.

  • How should we think about the brain's software and hardware?

    -Minsky regards the brain's hardware as the product of evolution and its software as what we learn, including ways for one part of the brain to regulate another. He suggests the brain may have multiple levels of representation, somewhat like registers in a computer.

  • What is Minsky's view on artificial intelligence?

    -Minsky thinks AI should consider the different ways humans solve problems rather than merely imitating how the human brain works. He also points to the importance of computer science for understanding intelligence.

  • What is Minsky's view on animal intelligence?

    -Minsky thinks animal intelligence may differ from human intelligence in fundamental ways, and he raises the possibility that some animals may be better than humans at certain things, such as remembering long sequences.

  • What does Minsky predict for the future of neuroscience?

    -Minsky predicts that neuroscience will be very exciting in the next generation, thanks to cheaper and higher-resolution brain scanning and new chemical tracer methods.

  • What does Minsky expect for the future of psychology and cognitive science?

    -Minsky expects more mathematical psychology to appear, and that cognitive psychology may adopt more computational complexity theory to understand the processes required to solve problems.

Outlines

00:00

📚 Supporting MIT OpenCourseWare and a discussion of neuroscience

This segment covers the funding message for MIT OpenCourseWare, Professor Marvin Minsky's opening exchange with the audience, and a discussion of neuroscience and how the brain works. Minsky declines to speculate about the brain's mechanisms, notes the complexity of neuronal connections, and considers the possible role of glial cells. He also reviews the history of neuroscience, including early ideas about neural networks and modern research on neurotransmitters and synapses.

05:02

🧠 Brain structure and problem solving

Minsky discusses the structure of the cerebral cortex, especially in mammals, and speculates that these structures are related to the ability to think and plan. He compares the brains of different animals, including humans and dogs, to explore levels of cognition and intelligence. There is also a discussion of how we respond to problems and the stages of emotional reaction, such as denial, bargaining, frustration, and depression.

10:05

🌙 Circadian rhythms and switching brain activity

This segment explores how animals switch parts of the brain on or off for different activities, for example how diurnal and nocturnal rhythms shape behavior. It also covers internal clocks and the influence of external light on behavior, as well as the variety of human sleep cycles and possible sleep disorders.

15:05

🤔 Criticism of the Society of Mind theory and the future of neuroscience

Minsky is asked about criticism of the Society of Mind theory and discusses possible causes of Alzheimer's disease. He talks about the brain's software and hardware and the software bugs that evolution may have left behind. He also predicts future developments in neuroscience, including new brain scanning techniques and synthetic chemical tracers.

20:05

🧐 Consciousness, intelligence, and animal cognition

Minsky and the audience discuss consciousness, intelligence, and whether animals have cognitive abilities similar to ours. They consider the potential of computers to simulate human intelligence, the brain's neural networks and cognitive architecture, the differences between dog and human intelligence, and how to infer animals' cognitive abilities from their behavior.

25:06

🧬 Evolution, genetic change, and brain development

This segment discusses how genetic changes affect brain development and what kinds of change evolution can make. Minsky points out that mutations can act either very early in embryonic development or on recently evolved structures, and explains how such changes affect the brain structure and function of offspring. He also discusses brain evolution, especially the dramatic expansion of the frontal cortex in humans.

30:06

🐕 Dog cognition compared with human intelligence

Minsky and the audience discuss the cognitive abilities of dogs, including whether they can plan and anticipate the future. They compare dog and human intelligence and ask whether the differences are merely a matter of computational power. They also touch on the intelligence and language abilities of other animals and how these compare to humans.

35:09

🚇 Moscow's dogs and intelligent behavior

This segment recounts how some stray dogs in Moscow have learned to ride the metro and use different strategies to beg for food in the city center. Minsky shares a story about his own dog riding public transit and discusses animals' ability to learn and adapt to their environment.

40:11

🤖 Directions for artificial intelligence

Minsky and the audience discuss future directions for artificial intelligence, including whether it should imitate human ways of thinking and how to evaluate the intelligence of machines. They consider machine learning versus traditional algorithms for particular problems and how different tests might be used to assess machine intelligence.

45:12

🧵 Knowledge representation and problem solving

This segment discusses different levels of knowledge representation, including semantic networks and intermediate representations. Minsky and the audience consider what kinds of representation animals might have and what behaviors those representations make possible, and how to infer an animal's representational abilities from its behavior.

50:14

📈 Psychology research and artificial intelligence

Minsky and the audience discuss the current state of psychological research, particularly on collaboration and problem solving. They also discuss Piaget's theories of development and how those theories bear on the understanding of AI, as well as the diversity of AI research and shifts within psychology.

55:25

👶 Intelligence development in infancy

This segment discusses the importance of intellectual development in infancy and how simulating an infant stage might help in building AI. Minsky raises questions about how to make machines learn, including whether machines need human-like instincts and how knowledge representations might be developed through education and experience.

1:00:28

🗣️ Language learning and infant development

Minsky and the audience discuss the role of language learning in infant development and why infants begin to speak at a particular developmental stage. They consider an engineering view of language, how a machine might be bootstrapped and build up experiential knowledge, and the similarities and differences between human infants and machine learning.

Keywords

💡Neuroscience

Neuroscience is the scientific study of the nervous system's structure, function, development, genetics, biochemistry, physiology, pharmacology, and pathology. In the video, Marvin Minsky talks about the neuroscience community's attempts to explain how the brain works and its research on neuronal connections and neurotransmitters.

💡Artificial intelligence

Artificial intelligence (AI) is intelligence exhibited by artificial systems. Minsky discusses the development of AI and its relation to human cognition and brain structure, and mentions how AI theories attempt to model the way the human brain works.

💡Cognitive psychology

Cognitive psychology is the branch of psychology that studies human cognitive processes, including perception, thinking, memory, language, and problem solving. In the video, Minsky treats modern cognitive psychology as closely tied to AI and asks about research on how people recognize a problem and choose a strategy for solving it.

💡Cerebral cortex

The cerebral cortex is the outer layer of the brain, responsible for higher functions such as thinking and planning. Minsky mentions the development of the cortex in mammals and how it differs between humans and other animals, suggesting its importance for intelligent behavior.

💡Neuroplasticity

Neuroplasticity refers to the structural and functional changes the nervous system makes in response to experience and learning over a lifetime. The video raises questions about brain connectivity and plasticity, including whether the brain can form new connections.

💡Synapse

A synapse is the junction between neurons through which signals are passed. Minsky discusses the synaptic gap and its role in neural signaling.

💡Mental models

A mental model is an individual's way of understanding and representing the world. In the video, Minsky notes that people bring different mental models to a problem, and these models influence which approach they choose.

💡Semantic networks

A semantic network represents knowledge as nodes and links that capture concepts and the relations between them. The video mentions semantic networks as a possible representation in AI systems, although the current trend leans toward probabilistic models.

💡Turing test

The Turing test is a thought experiment proposed by Alan Turing for judging whether a machine can exhibit behavior indistinguishable from a human's. Minsky refers to it while discussing how machine intelligence might be evaluated.

💡Evolutionary psychology

Evolutionary psychology is a branch of psychology that examines how psychological traits such as memory, perception, and language evolved under natural and sexual selection. In the video, Minsky discusses how evolution has shaped brain structure and function.

💡Cognitive development

Cognitive development refers to the changes in cognitive ability from birth through adulthood. The video mentions Piaget's stage theory and how it shapes our understanding of how children learn about and understand the world.

Highlights

MIT OpenCourseWare offers free, high-quality educational resources; supporters' donations keep it running.

Marvin Minsky discusses the hypothesis that glial cells in the brain might be involved in thinking.

Minsky stresses the complexity of neurons: each may have as many as 100,000 connections.

He suggests that brain function may not be carried by neurons alone; other supporting cells may take part.

Minsky reviews the history of neuroscience, including the realization that neurons do not form a continuous network.

He discusses the role of chemicals in the brain, such as hormones and epinephrine.

Minsky describes how the connections between brain centers tend to alternate, some inhibitory and some excitatory.

He uses the epileptic seizure as an analogy for what happens when brain activity runs away.

Minsky is inclined to look in the cerebral cortex for the architecture of memory and problem-solving systems.

He points out that the behavior of animals without a cortex can be explained by low-level reactions and reflexes.

Minsky explores how people deal with failure and frustration, and the stages of emotional reaction.

He discusses differences between animal and human brains, especially the size of the frontal cortex.

Minsky suggests that multiple systems may work in parallel in the brain, handling its important functions.

He mentions that different mental illnesses may arise from failures of different systems in the brain.

Minsky predicts that neuroscience will be very exciting in the coming years thanks to new scanning techniques and synthetic chemicals.

He criticizes current AI theories, noting that none has been confirmed to describe how the mammalian brain works.

Minsky discusses the role of software in intelligence, suggesting that the brain's learning and function may depend more on "software" than on "hardware."

He takes up the question of consciousness, suggesting dogs may have a different form of consciousness than humans.

Minsky explores how computer science changes the way we think about abilities and intelligence.

He raises the question of how to evaluate whether an artificial intelligence has reached human-level intelligence.

Transcripts

play00:00

The following content is provided under a Creative

play00:02

Commons license.

play00:03

Your support will help MIT OpenCourseWare

play00:06

continue to offer high quality educational resources for free.

play00:10

To make a donation or to view additional materials

play00:12

from hundreds of MIT courses, visit MIT OpenCourseWare

play00:16

at ocw.mit.edu.

play00:22

MARVIN MINSKY: I presume everyone has an urgent question

play00:25

to ask.

play00:33

Maybe I'll have to point to someone.

play00:36

AUDIENCE: One over there.

play00:39

MARVIN MINSKY: Oh, good.

play00:40

AUDIENCE: So [INAUDIBLE] exactly what's said,

play00:43

but you said that maybe the [INAUDIBLE] lights are

play00:46

associated to the glial cells.

play00:48

Is that right?

play00:51

MARVIN MINSKY: Oh, I don't want to speculate on how

play00:54

the brain works, because--

play00:57

[LAUGHTER] because there's this huge community

play01:03

of neuroscientists who write papers about--

play01:08

they're very strange papers because they talk about how

play01:11

maybe it's not the neuron.

play01:13

And I've just downloaded a long paper

play01:17

by someone whose name I won't mention about the idea

play01:23

that a typical neuron has 100,000 connections.

play01:29

And so something awesomely important

play01:31

must go on inside the neuron's body.

play01:35

And it's got all these little fibers and things.

play01:38

And presumably, if it's dealing with 100,000 signals

play01:43

or something, then it must be very complicated.

play01:46

So maybe the neuron isn't smart enough to do that.

play01:50

So maybe the other cells nearby that support the neurons

play01:55

and feed them and send chemicals to and fro around there

play01:59

have something to do with it.

play02:01

How many of you have read such articles?

play02:06

It's a very strange community, because--

play02:11

I think the problem is that history of that science

play02:21

started first it was generally thought that all

play02:27

the neurons were connected.

play02:29

And then around 1890 was the first clear idea

play02:36

that nerve cells weren't arranged

play02:39

in a continuous network.

play02:42

I think it was generally believed that they were all

play02:45

connected to each other, because as far as you

play02:48

could tell with the microscopes of the time

play02:56

it didn't show enough.

play02:58

And then the hypothesis that the neurons are separate

play03:06

and there are little gaps, called synapses,

play03:10

as far as I can tell started around the 1890s.

play03:15

And from then on, as far as I can see,

play03:24

neurology and psychology became more and more separate.

play03:28

And the neurologists got obsessed with chemicals,

play03:34

hormones, epinephrine, and there are about a dozen chemicals

play03:41

involved that you can detect when parts of the brain

play03:46

are activated.

play03:48

And so a whole bunch of folklore grew up

play03:53

over about the roles of these chemicals.

play03:58

And one thought of some chemicals

play04:00

as inhibitory and excitatory.

play04:03

And that idea still spreads, although what we know about

play04:09

the nervous system now-- and I think I mentioned this before--

play04:13

is that in general if you trace a neural pathway

play04:16

from one part of the brain to another, what happens

play04:20

is that the connections tend to alternate, not always,

play04:23

but frequently.

play04:25

So that this connection might inhibit this neuron.

play04:28

And then you look at the output of that neuron,

play04:31

and that might tend to excite neurons

play04:35

in the next brain center.

play04:36

And then most of those cells would tend to inhibit.

play04:42

I mean, each brain center gets inputs from several others.

play04:47

And so it's not that a brain center

play04:49

is excitatory or inhibitory, but the connections

play04:53

from one brain center to another tend to have this effect.

play04:57

And that's probably necessary from a systems

play05:02

dynamic point of view, because if all neurons tended

play05:06

to either do nothing or excite the next brain center,

play05:10

then what would happen?

play05:13

Soon as you got a certain level of excitement,

play05:16

then more and more brain centers would get activated.

play05:20

And the whole thing would explode.

play05:21

And that's more or less what happens

play05:24

in an epileptic seizure, where if you

play05:28

get enough electrical and chemical activity of one kind

play05:32

or another, mostly electrical--

play05:34

I think, but I don't know--

play05:36

then whole large parts of the brain

play05:38

start to fire in synchrony.

play05:41

And the thing spreads very much like a forest fire.
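
A toy sketch of the systems-dynamics point being made here (my own illustration, not from the lecture; the sizes and weights are made up): if every connection between centers is excitatory, the network's dominant growth factor sits well above 1 and activity multiplies at every step, whereas randomly alternating signs typically pull it below 1 so activity dies out instead of exploding.

```python
import numpy as np

# Toy model: 20 "brain centers" with random connection strengths.
# All-positive weights give a dominant per-step growth factor well above 1,
# so a small burst of activity multiplies each step ("forest fire").
# Alternating the signs of the same weights typically pulls it below 1.
rng = np.random.default_rng(0)
n = 20
excitatory = np.abs(rng.normal(scale=0.2, size=(n, n)))          # all excitatory
alternating = excitatory * rng.choice([-1.0, 1.0], size=(n, n))  # mixed signs

growth = lambda w: max(abs(np.linalg.eigvals(w)))  # dominant per-step growth factor
print(growth(excitatory))    # roughly n * mean weight, i.e. well above 1
print(growth(alternating))   # typically below 1, so activity decays
```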

play05:46

So that's a long rant.

play05:53

I guess I've repeated it several times.

play05:55

But it's hard to communicate with that community,

play05:59

because they really want to find the secret of thinking

play06:04

and knowledge in the brain cells,

play06:06

rather than in the architecture of the interconnections.

play06:11

So my inclination is to find an intermediate level, such as,

play06:19

at least in the cortex, which is what distinguishes the--

play06:27

does it start in mammals?

play06:29

AUDIENCE: I think so.

play06:32

MARVIN MINSKY: I think if--

play06:34

rather than a neurology book, I'm

play06:36

thinking of Carl Sagan's book, which there's

play06:40

is a sort of triune theory that's very popular,

play06:43

which is that the brain consists of three major divisions.

play06:48

And the-- I forget what the lowest level one is called,

play06:52

but the middle level is sort of the amphibian and then

play07:02

the mammalian and--

play07:05

it's in the mammalian development

play07:08

that large parts of the brain are cortexed.

play07:12

And the cortex isn't so much like a tangled neural net.

play07:19

But it's divided mainly into columns.

play07:23

And each column, these vertical columns,

play07:27

tend to have six or seven layers.

play07:29

I think six is the standard.

play07:33

And the whole thing is--

play07:36

what is it about 4 millimeters?

play07:38

4 or 5 millimeter thick, maybe a little more.

play07:46

And in each of these columns, there's

play07:50

major columns, which have about 1,000 neurons.

play07:53

And one of these columns is made up

play07:55

maybe 10 or 20 of these mini columns that are 50 or 100

play08:02

or whatever.

play08:04

And so my inclination is to suspect that since these

play08:09

are the animals that think and plan many steps ahead

play08:13

and do all the sorts of things we take for granted in humans,

play08:19

that we want to look there for the architecture of memory

play08:27

and problem-solving systems.

play08:30

In the animals without cortexes, you

play08:34

can account for most of their behavior

play08:36

in terms of fairly low-level, immediate stimulus response

play08:42

reflexes and large major states, like turning

play08:46

on some parts of some big blocks of these reflexes

play08:50

when it's hungry and turn on other blocks

play08:53

when there's an environmental threat and so forth

play08:58

or whatever.

play09:02

Anyway, I forget what--

play09:05

yes?

play09:06

AUDIENCE: So in Chapter 3 you talk

play09:09

about the stages we go do when we face something like your car

play09:15

breaks down and you can't go to work.

play09:18

That's the example given in the book.

play09:20

I'm wondering, how do we decide how we transition

play09:24

from one stage to another?

play09:26

And why do you go through the stages of denial, bargaining,

play09:35

like frustration, depression, and then

play09:38

like only the last stage seems productive?

play09:44

I guess, my main question is how do we

play09:46

decide that we should transition from stage to another

play09:50

from [INAUDIBLE]

play09:54

MARVIN MINSKY: That's a beautiful question.

play10:00

I think it's fairly well understood in the invertebrates

play10:04

that there are different centers in the brain

play10:08

for different activities.

play10:10

And I'm not sure how much is known

play10:16

about how these things switch.

play10:19

How does an animal decide whether it's time to--

play10:25

for example, most animals are either diurnal or nocturnal.

play10:29

So some stimulus comes along, like it's getting dark,

play10:35

and a nocturnal animal might then start waking up.

play10:39

And it turns on some part of the brain,

play10:42

and it turns off some other parts.

play10:45

And it starts to sneak around looking for food

play10:48

or whatever it does at night.

play10:50

Whereas a diurnal animal, when it starts to get dark,

play10:57

that might trigger some brain center to turn on,

play11:02

and it looks for its place to sleep and goes and hides.

play11:07

So some of these are due to external things.

play11:10

Then, of course, they're internal clocks.

play11:13

So for lots of animals, if you put it

play11:17

in a box that's dimly illuminated

play11:22

and it has a 24-hour cycle of some sort,

play11:26

it might persist in that cycle for quite a few days

play11:31

and go to sleep every 24 hours for half the time and so on.

play11:39

A friend of mine once decided he would see about this.

play11:44

And it's a famous AI theorist named Ray Solomonoff.

play11:52

And he put black paint on all his windows.

play11:59

And found that he had a 25 or 26-hour natural

play12:05

cycle, which was very nice.

play12:12

And this persisted for several months.

play12:17

I had another friend who lived in the New York subways,

play12:22

because his apartment was in a building that

play12:26

had an entrance to the subway.

play12:28

And he stayed out of daylight for six months.

play12:32

But anyway, he too found that he preferred

play12:39

to be on a 25 or 26-hour day than 24.

play12:45

I'm rambling.

play12:46

But we apparently have several different systems.

play12:52

So there's dead reckoning system,

play12:54

where some internal clocks are regulating your behavior.

play12:58

And then there are other systems where

play13:00

your people are very much affected by the amount of light

play13:05

and so forth.

play13:09

So we probably have four or five ways

play13:12

of doing almost everything that's important.

play13:15

And then people get various disorders

play13:19

where some of these systems fail.

play13:22

And a person doesn't have a regular sleep cycle.

play13:26

And there are disorders where people fall--

play13:32

what's it called when you fall asleep every few minutes?

play13:35

AUDIENCE: Narcolepsy.

play13:36

MARVIN MINSKY: Narcolepsy and all sorts

play13:40

of wonderful disorders just because the brain has evolved

play13:44

so many different ways of doing anything that's very important.

play13:55

Yeah?

play13:57

AUDIENCE: Can you describe the best piece of criticism

play14:00

for the society of mind theory?

play14:02

MARVIN MINSKY: Best piece of what?

play14:03

AUDIENCE: The best criticism.

play14:05

MARVIN MINSKY: Oh.

play14:13

It reminds me of the article I recent

play14:18

read about the possibility of a virus for--

play14:27

what's the disorder where--

play14:30

AUDIENCE: Alzheimer's.

play14:31

MARVIN MINSKY: No.

play14:32

The-- uh-- [LAUGHTER] actually, there

play14:37

isn't any generally accepted cause for Alzheimer's, as far

play14:42

as I know.

play14:45

What?

play14:45

AUDIENCE: Somebody just did an experiment

play14:47

where they injected Alzheimer infected matter into someone,

play14:50

and they got the same plaque.

play14:52

MARVIN MINSKY: Oh, well, right, I

play14:58

wonder if that's a popular theory.

play15:04

No, what's the one where people--

play15:07

AUDIENCE: Fibromyalgia.

play15:09

MARVIN MINSKY: Say it again.

play15:11

AUDIENCE: Fibromyalgia.

play15:12

MARVIN MINSKY: Yes, right.

play15:13

That's right, which is not recognized by most theorists

play15:18

to be a definite disease.

play15:22

But there's been an episode in which somebody--

play15:28

I forget what her name is--

play15:31

was pretty sure that she had found a virus for it.

play15:35

And every now and then somebody revives that theory

play15:41

and tries to get more evidence for it.

play15:46

Anyway, there must be disorders where the programming is bad,

play15:51

rather than a biochemical disorder, because whatever

play15:59

the brain is, the adult brain certainly

play16:05

has a very large component of what

play16:08

we would, in any other case, consider to be software.

play16:12

Namely lots of things that you've learned, including ways

play16:16

for one part of the brain to discover how to

play16:20

modulate or turn on or turn off other parts of the brain.

play16:25

And since we've only had this kind of cortex

play16:31

for 4 or 5 million years, it's probably

play16:34

still got lots of bugs.

play16:36

Evolution never knows what--

play16:40

when you make a new innovation, you

play16:42

don't know what's going to come after that that might find bugs

play16:48

and ways to get short-range advantages,

play16:51

short-term advantages at the expense

play16:54

of longer-term advantages.

play16:56

So lots of mental diseases might be software bugs.

play17:01

And a few of them are known to be

play17:03

connected to abnormal secretions of chemicals and so forth.

play17:12

But even in those cases, it's hard to be sure that

play17:16

the overproduction or underproduction

play17:19

of a neurologically important chemical is--

play17:26

what should I call it--

play17:29

a biological disorder or a functional disorder,

play17:33

because some part of the nervous system

play17:35

might have found some trick to cause abnormal secretions

play17:42

of some substance.

play17:47

That's the sort of thing that we can

play17:50

expect to learn a great deal more about

play17:53

in the next generation because of the lower cost and greater

play17:59

resolution of brain scanning techniques and--

play18:06

what's his name-- and new synthetic

play18:09

ways of putting in fluorescent chemicals into a normal brain

play18:14

without injuring it much, so that you can now

play18:19

do sort of macro chemical experiments of seeing what

play18:24

chemicals are being secreted in the brain

play18:29

with new kinds of scanning techniques.

play18:32

So neuroscience is going to be very exciting

play18:34

in the next generation with all the great new instruments.

play18:44

As you know, my complaint is that somehow introduction

play18:50

to the--

play18:52

I'm not saying any of the present AI theories have been

play18:55

confirmed to tell you that the brain works as such and such

play19:00

a rule-based system or such and such a--

play19:03

or use Winston-type representations

play19:07

or Roger Shank-type representations

play19:09

or scripts or frames or whatever.

play19:12

And the next to last chapter of The Emotion Machine

play19:18

sort of summarizes I think almost a dozen different AI

play19:25

theories of ways to represent knowledge.

play19:29

Nobody has confirmed that any of those particular ideas

play19:35

represent what happens in a mammalian brain.

play19:41

And the problem to me is that the neuroscience community just

play19:49

doesn't read that stuff and doesn't design

play19:52

experiments to look for them.

play19:58

David has been moving from computer science and AI

play20:03

into that.

play20:05

So he's my current source of knowledge about

play20:08

what's happening there.

play20:12

Have any of you been following contemporary neuroscience?

play20:20

That's strange.

play20:22

Yeah?

play20:27

AUDIENCE: So you already talked about software a little bit.

play20:31

So I think they analyzed Einstein's brain.

play20:37

And I realize like that's why I talk about glial cells.

play20:40

And maybe he had a lot of more glial cells than normal humans.

play20:49

And so do believe that the intelligence of humans

play20:55

is like more of the software side or on the hardware side?

play21:00

Like we have computers that are very, very powerful, where

play21:03

we create software that we can run these machines

play21:08

that reproduce like humans.

play21:16

MARVIN MINSKY: I don't see any reason to doubt it.

play21:25

As far as we know computers can simulate anything.

play21:29

What they can't do yet, I suppose,

play21:33

is simulated large scale quantum phenomenon,

play21:37

because if you know the Feynman theory of quantum mechanics

play21:46

is that if you have a network of physical systems

play21:53

that are connected, then it's in the nature of physics

play21:58

that whatever happens from one state to another

play22:07

in the real universe, whatever happens actually

play22:12

happens by the wave function.

play22:17

The wave function represents the sum

play22:19

of the activities propagating through all possible paths.

play22:27

So in some sense that's too exponential

play22:32

to simulate on a computer.

play22:35

In other words, I believe the biggest supercomputers

play22:41

can simulate a helium atom today fairly well.

play22:46

But they can't simulate a lithium atom,

play22:50

because it's sort of four or five layers of exponentiation.

play22:55

So it would be 2 to the 2 to the 2 to the 2 and 4 to the 4

play23:00

to the 4 to the 4.

play23:02

[INAUDIBLE]

play23:06

But I suspect that the reason the brain works

play23:09

is that it's evolved to prevent quantum effects from making

play23:14

things complicated.

play23:17

The great thing about a neuron is that, generally speaking,

play23:22

a neuron fires all or none.

play23:24

And you get this point--

play23:27

you have to get a full half volt potentially

play23:33

between the neurons firing [INAUDIBLE] fluid.

play23:39

And a half a volt is a big [INAUDIBLE]..

play23:48

AUDIENCE: So you believe that the software that we have right

play23:52

now is equivalent to, for example,

play23:56

the intelligence that we have like in dogs

play23:59

or, for example, simple animals is like the difference that

play24:06

like-- do we just need to implement the software,

play24:10

like multiply the software?

play24:12

Or so how we need to create a whole software that--

play24:18

MARVIN MINSKY: No, there doesn't seem

play24:19

to be much difference in the architecture,

play24:23

in the local architecture of--

play24:25

AUDIENCE: Turn your microphone on.

play24:28

The one in your pocket.

play24:29

MARVIN MINSKY: Oh, did I turn it off again?

play24:30

AUDIENCE: Yes.

play24:32

MARVIN MINSKY: It's not green.

play24:34

AUDIENCE: Yeah, so throw the switch.

play24:36

Is it green now?

play24:37

MARVIN MINSKY: Now, it's green.

play24:42

The difference between the dog and the person

play24:44

is the huge frontal cortex.

play24:48

I think the rest of it is fairly similar.

play24:51

And I presume the hippocampus and amygdala and the structures

play24:56

that control which parts of the cortex

play24:58

are used for what are somewhat different.

play25:01

But the small details of the--

play25:05

all mammalian brains are practically the same.

play25:09

I mean, basically, you can't make an early genetic change

play25:14

in how neurons work where all the brain

play25:16

cells of the offspring would be somewhat different

play25:20

and the thing would be dead.

play25:22

So evolution has this property that generally there

play25:27

are only two places in the development of an embryo

play25:31

that evolution can operate.

play25:35

Namely in the pre-placental stage,

play25:39

you can change the way the egg breaks up and evolves.

play25:44

And you can have amazing things like identical twins

play25:47

happen without any effect on the nature of the adult offspring.

play25:53

Or you can change the things that happened most recently

play25:57

in evolution like little tweaks in how

play26:02

some part of the nervous system works,

play26:05

if it doesn't change earlier stages, what you--

play26:10

However, mutations that operate in the middle of all that

play26:13

and change in the number of segments in the embryo,

play26:17

I guess you could have a longer tail or a shorter tail.

play26:21

And that won't effect much.

play26:22

But if you change the 12 segments of the spine

play26:27

that the brain develops from, you'd

play26:29

get a huge alteration in how that animal will think.

play26:38

In other words, evolution cannot change intermediate structures

play26:44

very much or the animal won't live.

play26:50

Bob Lawler.

play26:50

AUDIENCE: If one thinks of comparing a person to a dog,

play26:55

would it not be most appropriate to think of those persons who

play26:59

were like the wild boy of southern France

play27:03

who grew up in the woods without any language

play27:07

and say that if you're going to look

play27:09

at individual's intelligence that

play27:12

would be a fair comparison with the dog.

play27:15

Whereas what we have when we think of people today

play27:19

is people who have learned so much through interaction

play27:22

with other people that the transmission of culture,

play27:26

is not essentially ways of thinking

play27:29

that have been learned throughout the history

play27:31

of civilization and some of us are able to pass on to others?

play27:36

MARVIN MINSKY: Oh, sure.

play27:37

Although if you expose a dog to humans,

play27:42

he doesn't learn language.

play27:44

So--

play27:45

AUDIENCE: He may or may not come if you call him.

play27:47

MARVIN MINSKY: Right.

play27:51

But presumably language is fairly recent.

play27:58

So you could have mutations in the structure of the language

play28:05

centers and still have a human that's alive.

play28:09

And it might be better at language than most other people

play28:12

or somewhat worse.

play28:14

So we could have lots of small mutations in anything

play28:18

that's been recently evolved.

play28:26

But the frontal cortex is--

play28:31

the human cortex is really very large

play28:34

compared to the rest of the brain.

play28:38

Same in dolphins and a couple of other animals, I forget,

play28:43

whales.

play28:45

yeah?

play28:45

AUDIENCE: So the reason why I ask

play28:47

that is that it seems to me that we have some quality,

play28:52

like some kind of--

play28:54

we can see the world--

play28:56

like add some qualities to the world.

play28:59

And like this is what I would call consciousness.

play29:03

And like for me, it seems that dogs also

play29:07

have this quality of like seeing the world

play29:11

and like adding qualities to the world, so like maybe,

play29:16

this is good, this is bad.

play29:18

Like there are different qualities for different beings.

play29:22

And like the software that we produce right now

play29:26

seems to be maybe faster and like maybe do more tests

play29:31

than what maybe a dog does.

play29:35

But for me, it doesn't seem that it has essential display

play29:42

quality--

play29:44

I think like it doesn't have consciousness

play29:48

in the sense it doesn't like abrogate quality to the things

play29:53

in the world maybe.

play29:55

MARVIN MINSKY: Well, I think I know what you're getting at.

play30:02

But you're using that word consciousness,

play30:05

which I've decided to abandon, because it's

play30:12

36 different things.

play30:14

And probably a dog has 5 or 6 of them or 31.

play30:20

I don't know.

play30:22

But one question is, do you think

play30:33

a dog can think several steps ahead and consider

play30:39

two alternative--

play30:46

that's funny.

play30:52

Oh, let's make this abstract.

play30:54

So here's a world.

play30:56

And the dog is here.

play30:58

And it wants to get here.

play31:00

And there are all sorts of obstacles in it.

play31:03

So can the dog say, well, if I went

play31:06

this way I'd have such and such difficulty, whereas if I

play31:11

went this way, I'd have this difficulty.

play31:15

Well, I think this one looks better.

play31:19

Do you think your dog considers two or three alternatives

play31:22

and makes plans?

play31:25

I have no idea.

play31:26

But the curious thing about a person

play31:31

is you can decide that you're going

play31:34

to not act in the situation until you've

play31:40

considered 16 plans.

play31:43

And then one part of your brain is

play31:45

making these different approaches to the problem.

play31:50

And another part of your brain is saying,

play31:52

well, now, I've made five plans, and I'm beginning

play31:56

to forget the first one.

play31:57

So I better reformulate it.

play32:01

And you're doing all of these self-conscious in the sense

play32:07

that you're making plans that involve predicting what

play32:14

decisions you will make.

play32:17

And instead of making them, you make the decision

play32:20

to say I'm going to follow out these two plans

play32:23

and use the result of that to decide which one to.

play32:29

Do you think a dog does any of that?

play32:31

Does it look around and say, well,

play32:33

I could go that way or this way?

play32:35

Hmm.

play32:39

I remember our dog was good at if you'd throw

play32:44

a ball it would go and get it.

play32:46

And if you threw two balls it would go and get both of them.

play32:51

And sometimes if you threw three balls,

play32:53

it would go and get them all.

play32:57

And sometimes if a ball would roll under a couch

play33:00

that it couldn't reach, it would get the other two,

play33:04

and it would think.

play33:05

And then it would run back to the kitchen

play33:07

where that ball is usually found.

play33:10

And then it would come back disappointed.

play33:12

So what does that mean?

play33:15

Did it have parallel plans?

play33:17

Or does it make a new one when the previous one fails?

play33:23

And they're not actually parallel.

play33:29

What's your guess?

play33:31

How far ahead does a dog think?

play33:33

Do you have a dog?

play33:34

AUDIENCE: Yeah.

play33:35

I do have a dog.

play33:38

But I don't believe that's the essential part of beings

play33:43

that have some kind of advanced brain.

play33:47

Like we can plan ahead.

play33:49

Humans can plan ahead.

play33:51

But I don't think they are the fundamental part

play34:00

of intelligence.

play34:02

Like humans, I think Winston says

play34:06

that humans are better than the primates

play34:09

in like they can understand stories

play34:12

and they can join together stories.

play34:15

But somehow I don't buy the story that primates are just

play34:26

like rule planners.

play34:29

I think somehow we have some quality meshing of the world

play34:37

and like somehow we're not writing a software.

play34:43

MARVIN MINSKY: But, you know, it's funny.

play34:44

Computer science teaches us things

play34:47

that weren't obvious before.

play34:51

Like it might turn out that if you're a computer

play34:55

and you only have two registers, then--

play35:00

well, in principle, you could do anything,

play35:01

but that's another matter.

play35:04

But it might turn out that maybe a dog has only two registers

play35:09

and a person has four.

play35:11

And a trivial thing like that makes

play35:14

it possible to have two plans and put them in suspense

play35:19

and think about the strategy and come back and change one.

play35:23

Whereas if you only had two registers,

play35:28

your mind would be much lower order.

play35:30

And there's no big difference.

play35:32

So computer science tells us that the usual way of thinking

play35:39

about abilities might be wrong.

play35:44

Before computer science, people didn't really

play35:47

have that kind of idea.

play35:52

Many years ago, I was in a contest--

play35:54

I mean, you know, a science, because some of our friends

play36:02

showed that you could make a universal computer with four

play36:04

registers.

play36:06

And I had discovered some other things,

play36:11

and I managed to show that you could make a universal computer

play36:17

with just two registers.

play36:18

And that was a big surprise to a lot of people.
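
To make the register idea concrete, here is a minimal sketch, in Python, of a counter machine: registers that can only be incremented, or decremented with a jump taken when the register is already zero. The instruction names and the example program are my own illustration of the style of machine being discussed, not Minsky's actual two-register universality construction.

```python
# A tiny counter-machine interpreter: each instruction either increments a
# register, or decrements it, jumping elsewhere when it is already zero.
def run(program, registers):
    pc = 0
    while pc < len(program):
        op, reg, *target = program[pc]
        if op == "inc":                 # ("inc", r): registers[r] += 1
            registers[reg] += 1
            pc += 1
        elif op == "decjz":             # ("decjz", r, t): jump to t if zero, else decrement
            if registers[reg] == 0:
                pc = target[0]
            else:
                registers[reg] -= 1
                pc += 1
    return registers

# Example program: add register 1 into register 0, i.e. compute r0 = r0 + r1.
add = [
    ("decjz", 1, 3),   # 0: if r1 == 0, halt (jump past the end)
    ("inc", 0),        # 1: otherwise move one unit from r1 to r0
    ("decjz", 2, 0),   # 2: r2 stays 0, so this acts as an unconditional jump to 0
]
print(run(add, {0: 3, 1: 4, 2: 0}))   # {0: 7, 1: 0, 2: 0}
```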

play36:27

But there never was anything in the history

play36:32

of psychology of that nature.

play36:36

So there never were really technical theories of--

play36:45

it's really computational complexity.

play36:48

What does it take to solve certain kinds of problems?

play36:51

And until the 1960s, there weren't any theories of that.

play36:56

And I'm not sure that that aspect of computer sciences

play37:03

actually reach many psychologists

play37:05

or neuroscientists.

play37:08

I'm not even sure that it's relevant.

play37:10

But it's really interesting that the difference

play37:13

between 2 and 3 registers could make an exponential difference

play37:18

in how fast you could solve certain kinds of problems

play37:22

and not others.

play37:26

So maybe there'll be a little more mathematical psychology

play37:30

in the next couple of decades.

play37:34

Yeah.

play37:37

AUDIENCE: So in artificial intelligence,

play37:40

how much of our effort should be devoted

play37:44

to a kind of reflecting on our thinking as humans

play37:48

and trying to figure out what's really going on

play37:50

inside our brains and trying to kind of implement

play37:53

that versus observing and identifying

play37:57

what kinds of problem we, as humans, can solve and then come

play38:01

up with an intuitive way for a computer

play38:03

to kind of in a human-like way solve these problems?

play38:07

MARVIN MINSKY: They're a lot of nice questions.

play38:09

I don't think it doesn't make any sense

play38:13

to suggest that we think about what's happening in our brains,

play38:18

because that takes scientific instruments.

play38:21

But it certainly makes sense to go over

play38:28

older theories of psychology and ask

play38:38

to solve a certain kind of problem,

play38:39

what kind of procedures are absolutely necessary?

play38:43

And you could find some things like that, like how

play38:47

many registers would you need and what kinds of conditionals

play38:51

and what kind of addressing.

play39:00

So I think a lot of cognitive psychology, modern cognitive

play39:03

psychology, is of that character.

play39:07

But I don't see any way to introspect well enough

play39:17

to guess how your brain does something,

play39:20

because we're just not that conscious.

play39:25

You don't have access to--

play39:28

you could think for 10 years about how

play39:30

do I think of the next word to speak, and unlikely

play39:36

that you would--

play39:37

you might get some new ideas about how

play39:39

this might have happened, but you couldn't be sure.

play39:47

Well, I take it back.

play39:56

You can probably get some correct theories

play39:59

by being lucky and clever.

play40:02

And then you'd have to find a neuroscientist

play40:04

to design an experiment to see if there's

play40:08

any evidence for that.

play40:10

In particular, I'd like to convince

play40:12

some neurologists to consider the idea of k-lines.

play40:18

It's described I think in both of my books.

play40:23

And think of experiments to see if you could get them to light

play40:29

up or otherwise localize in--

play40:35

once you have in your mind the idea that maybe the way one

play40:40

brain connects--

play40:43

sends information to another is over something like k-lines,

play40:47

which I think I talked about that the other day--

play40:52

random superimposed coding on parallel wires,

play40:57

then maybe you could think of experiments

play40:59

that even present brain scanning techniques

play41:03

could use to localize these.

play41:08

My main concern is that the way they do brain scanning now

play41:14

is to set thresholds to see which brain centers light up

play41:20

and which turn off.

play41:23

And then they say, oh, I see this activity looks

play41:26

like it happens in the lateral hippocampus

play41:30

because you see that light up.

play41:32

I think that there should be at least a couple

play41:37

of neuroscientist groups who do the opposite, which

play41:42

is to reduce the contrast.

play41:45

And when there are several brain centers that

play41:48

seem to be involved in an activity,

play41:51

then say something to the patient and look for one area

play41:56

to get 2% dimmer and another to look 4% brighter

play42:02

and say that might mean that there's

play42:04

a k-line going from this one to that one

play42:07

with an inhibitory effect on this or that.

play42:12

But as far as I know right now, every paper

play42:15

I've ever seen published showing brain centers lighting up

play42:19

has high contrast.

play42:21

And so they're missing all the small things.

play42:24

And maybe they're only seeing the end result of the process

play42:29

where a little thinking has gone on with all these intricate

play42:35

low intensity interactions, and then the thing

play42:38

decides, oh, OK, I'm going to do this.

play42:41

And you conclude that that brain center which lit up

play42:45

is the one that decided to do this,

play42:48

whereas it's the result of a very small, fast avalanche.

play42:57

AUDIENCE: Have you seen the one a couple of weeks

play42:59

ago about reading out the visual in real time?

play43:03

MARVIN MINSKY: From the visual cortex?

play43:04

AUDIENCE: Yes.

play43:06

Quite a nice hack-- they aren't actually

play43:07

reading out the visual field.

play43:09

For each subject, they do a massive amount of training

play43:12

where they flash thousands of 1-second video clips

play43:17

and assemble a database of very small perturbations

play43:21

in different parts of the visual cortex lighting up.

play43:24

And they show a novel video to each of the subjects

play43:28

and basically just do a linear combination

play43:32

of all of the videos that they have

play43:35

done in the training phase weighted by how closely things

play43:39

line up in the brain.

play43:41

And you can sort of see what's going on.

play43:44

It's quite striking.
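
A minimal sketch of the weighted-combination idea the questioner describes, assuming nothing about the actual paper's pipeline: reconstruct the novel stimulus as an average of training clips, weighted by how similar their evoked responses are to the response to the novel clip. The array shapes, the dot-product similarity, and the top-k cutoff are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: decode a novel stimulus as a weighted average of
# training clips, weighted by brain-response similarity.
def reconstruct(novel_response, train_responses, train_clips, k=100):
    sims = train_responses @ novel_response      # similarity to each training response
    top = np.argsort(sims)[-k:]                  # keep the k best-matching clips
    weights = sims[top] / sims[top].sum()
    # Weighted linear combination of the corresponding training frames.
    return np.tensordot(weights, train_clips[top], axes=1)

# Toy shapes: 5000 training clips, 200-voxel responses, 16x16 grayscale frames.
rng = np.random.default_rng(0)
train_responses = rng.normal(size=(5000, 200))
train_clips = rng.random(size=(5000, 16, 16))
novel_response = rng.normal(size=200)
print(reconstruct(novel_response, train_responses, train_clips).shape)  # (16, 16)
```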

play43:46

MARVIN MINSKY: Can you tell what they're thinking?

play43:48

AUDIENCE: You can only tell what they're seeing.

play43:51

But I think--

play43:51

MARVIN MINSKY: You know, if your eyes are closed,

play43:53

your primary visual cortex probably doesn't do anything,

play43:56

does it?

play43:57

AUDIENCE: I think it's just--

play43:58

yeah.

play44:00

MARVIN MINSKY: But the secondary one

play44:02

might be representing things that might be.

play44:06

AUDIENCE: Yes.

play44:07

So the goal of the authors of this paper

play44:10

is eventually to literally make movies out of dreams.

play44:16

But that's a long way off.

play44:19

MARVIN MINSKY: It's an old idea in science fiction.

play44:26

How many of you read science fiction?

play44:31

Wow, that's a majority.

play44:38

Who's the best new writer?

play44:41

AUDIENCE: Neal Stephenson.

play44:44

MARVIN MINSKY: He's been writing a long time.

play44:47

AUDIENCE: He's new compared to Heinlein.

play44:50

[LAUGHTER]

play45:00

MARVIN MINSKY: I had dinner with Stephenson

play45:01

at the Hillis's a couple of years ago.

play45:12

Yeah?

play45:14

AUDIENCE: So from what I understood,

play45:16

it seems that you're saying that the difference between us

play45:21

and like, for example, dogs is just a computational power.

play45:28

So do you believe that the difference

play45:32

between dogs and computers is also just computational?

play45:37

Like what's the difference between dogs and like Turing

play45:43

machine?

play45:46

Or there is no difference?

play45:50

MARVIN MINSKY: It might be that only humans and maybe

play45:55

some of their closest relatives can imagine a sequence.

play46:04

In other words, the simplest and oldest theories in psychology

play46:09

were the theories like David Hume had

play46:14

the idea of association, one idea in the mind or brain

play46:23

causes another idea to appear in another.

play46:26

So that means that a brain that's learned associations

play46:32

or learn if/then rule-based systems

play46:36

can make chains of things.

play46:38

But the question is, can any animal, other than humans,

play46:45

imagine two different situations and then compare them and say,

play46:50

if I did this and then that, how would the result differ

play46:55

from doing that and then this?

play46:57

If you look at Gerry Sussman's thesis--

play47:01

if you're at MIT, a good thing to do

play47:07

and you're taking your course, you

play47:09

should read the PhD thesis of your professor.

play47:16

It not only will help you understand

play47:18

better what the professor said, you'll

play47:22

get a higher grade, if you care, and many other advantages.

play47:28

Like you'll actually be able to talk to him

play47:30

and his mind won't throw up.

play47:40

So, you know, I don't know if a dog can recapitulate as--

play47:47

can the dog think, I think I'll go around this fence

play47:52

and when I get to this tree I'll do this, I'll pee on it--

play47:56

that's what dogs do--

play48:01

whereas if I go this way something else will happen?

play48:05

It might be that pre-primates

play48:12

can't do much of that.

play48:14

On the other hand, if you ask, what is the song of the whale?

play48:23

What's the whale that has this 20-minute song?

play48:32

My conjecture is that a whale has

play48:36

to swim 1,000 miles or several hundred miles sometimes

play48:41

to get the food it wants because things change.

play48:46

And each group of whales--

play48:51

humpback whales, I guess, sing this song

play48:54

that's about 20 minutes long.

play48:57

And nobody has made a good conjecture

play49:01

about what the content of that song,

play49:06

but it's shared among the animals.

play49:08

And they can hear it 20 or 50 miles away and repeat it.

play49:14

And it changes every season.

play49:16

So I suspect that the obvious thing that it should be about

play49:20

is where's the food these days, where

play49:23

are the best flocks of fish to eat,

play49:26

because a whale can't afford to swim 200 miles to the place

play49:33

where its favorite fish were last year and find it empty.

play49:39

It takes a lot of energy to cross the ocean.

play49:46

So maybe those animals have the ability

play49:51

to remember very long sequences and even

play49:54

some semantics connected with it.

play49:57

I don't know if dogs have anything like that.

play50:01

Do dogs ever seem to be talking to each other?

play50:05

Or they just--

play50:06

AUDIENCE: I have a story about dogs.

play50:09

So apparently in Moscow, not all dogs,

play50:13

but a very small fraction of the stray dogs in the city

play50:17

have learned how to ride the metro.

play50:20

They live out in the suburbs because I

play50:22

guess people give them less trouble when they're out

play50:24

in the suburbs.

play50:25

And then they take the subway each day

play50:27

into the city center where there are more people.

play50:30

And they have various strategies for begging in the city center.

play50:33

So for instance, they find some guy with a sandwich,

play50:36

and they bark really loudly behind the guy,

play50:39

and the guy would drop the sandwich.

play50:40

And then they would steal it.

play50:42

Or they have a pack of them, and they all know each other.

play50:45

And they send out a really cute one to beg for food,

play50:49

and so they'll give the cute one food.

play50:51

And the cute one brings it back to everyone else.

play50:53

And simply navigating the subway is actually a bit complicated

play50:57

for a dog, but somehow a very small group of a dogs in Moscow

play51:02

have learned how to do it, like figure out where their stop is,

play51:05

get on, get off.

play51:08

MARVIN MINSKY: Yeah, our dog once hopped on the Green Line

play51:11

and got off at Park Street.

play51:14

So she was missing for a while.

play51:16

And somebody at Park Street called up

play51:19

and said your dog is here.

play51:22

So I went down and got her.

play51:25

And the agent said, you know, we had

play51:30

a dog that came to Park Street every day and changed trains

play51:37

and took the Red Line to somewhere.

play51:41

And finally, we found out that its master had--

play51:47

it used to go to work with its owner every day, and he died.

play51:53

And the dog took the same trip every day and.

play51:58

The T people understood that he shouldn't be bothered with.

play52:06

Our dog chased cars.

play52:08

Was it Jenny?

play52:10

And that was terrible because we knew she was going to get hurt.

play52:15

And finally, a car squashed her leg,

play52:20

and she was laid up for a while with a somewhat broken leg.

play52:25

And I thought, well, she won't chase cars anymore.

play52:29

But she did.

play52:30

But what she wouldn't do is go to the intersection of Carlton

play52:36

and Ivy Street anymore, which is--

play52:40

so she had learned something.

play52:45

But it wasn't the right thing.

play52:57

I'm not sure I answered your--

play53:05

AUDIENCE: Actually, according to--

play53:08

there's this story that you gave in Chapter 2

play53:10

about the girl who was digging dirt.

play53:15

So in the case where she learns whether in digging dirt

play53:21

is a good or bad activity is when

play53:23

there is somebody with whom she had an attachment bond

play53:27

present who's telling her whether it's good or bad.

play53:30

And in the case where she learned to avoid

play53:32

that fight is when something bad happens to her in the spot.

play53:35

So in a sense, the dog is behaving just like that logic.

play53:40

MARVIN MINSKY: Yes.

play53:42

Except that the dog is oriented toward location rather than

play53:47

something else.

play53:49

So--

play53:56

AUDIENCE: Professor, can you talk about possible hierarchy

play54:01

or representations schemes of knowledge,

play54:05

like semantic is on top.

play54:10

And at the bottom, there's like--

play54:13

you're mentioning in the middle of k-lines

play54:14

they were on the bottom.

play54:15

There's things up there.

play54:17

So the way I thought about the present therapist

play54:19

asked that humans--

play54:21

it's just natural that you need all

play54:23

of the immediate representation in order to support something

play54:28

like semantic nets.

play54:29

And it seems natural to me to think

play54:31

that humans have all these double hierarchy

play54:34

of representations, but dogs might

play54:36

have something only in the middle,

play54:39

like they only have something like neuronets or something.

play54:44

So my question is, what behaviors

play54:47

that you could observe in real life could only

play54:52

be done with one of these intermediate representations

play54:55

of knowledge that can't be done with something like machine

play54:59

learning?

play55:00

MARVIN MINSKY: Hmm, you mean machine learning

play55:06

of some particular kind?

play55:09

AUDIENCE: That's currently fashionable I think.

play55:12

Kind of like with brute force of calibration of some parameter.

play55:25

It seems to me that if you recognize a behavior like that,

play55:29

it might be a worthy intermediate goal

play55:31

to be able to model that instead of trying to model something

play55:34

like natural language, which is you

play55:36

might need the first part to get the second part.

play55:40

MARVIN MINSKY: Well, it would be nice to know--

play55:46

I wonder how much is known about elephants, which

play55:49

are awfully smart compared to--

play56:01

I suspect that they are very good at making plans,

play56:04

because it's so easy for an elephant

play56:07

to make a fatal mistake.

play56:11

So unfortunately, probably no research group

play56:17

has enough budget to study that kind of animal,

play56:23

because it's just too expensive.

play56:27

How smart are elephants?

play56:29

Anybody-- I've never interacted with one.

play56:39

I'm not sure if you have a question.

play56:43

AUDIENCE: I think the question is are there behaviors

play56:47

that you need an intermediate level of the repetition

play56:52

of knowledge in order to perform that you

play56:55

don't need like the highest level like semantic--

play56:58

like basically natural language to do.

play57:00

So you could say that by some animal doing this behavior,

play57:03

I know that it has some intermediate level

play57:05

of representation of knowledge that's

play57:08

more than kind of a brute force machine learning approach.

play57:11

Because like what's discussed before,

play57:13

a computer can do path finding, which

play57:16

is like a brute force approach.

play57:17

I don't think that's how humans do it or animals do it.

play57:22

MARVIN MINSKY: I can't think of a good--

play57:28

it's just hard to think of any animals besides us that

play57:32

have really elaborate semantic networks.

play57:40

There's Koko, who is a gorilla that apparently

play57:46

had hundreds of words.

play57:49

But--

play57:52

AUDIENCE: I think the question is

play57:53

to find something that's lower than words, like maybe Betty

play57:57

the crow--

play57:59

MARVIN MINSKY: With that stick, yeah.

play58:05

How many of you seen the crow movie?

play58:12

She has a wire that she bends and pulls something out

play58:15

of a tube.

play58:19

But--

play58:23

AUDIENCE: I don't think machine learning can do that.

play58:26

But I don't think you need semantic nets either.

play58:28

MARVIN MINSKY: I have a parrot who lives

play58:30

in a three-dimensional cage.

play58:32

And she knows how to get from any place to another.

play58:37

And if she's in a hurry, she'll find a new way

play58:42

at the risk of injuring a wing, because there

play58:45

are a lot of sticks in the way.

play58:47

So flying is risky.

play58:56

Our daughter, Julie, once visited Koko, the gorilla.

play59:01

And she was introduced--

play59:05

Koko's in a cage.

play59:08

And Penny, who is Koko's owner, introduces

play59:17

Julie in sign language.

play59:22

It's not spoken.

play59:24

It's sign language.

play59:28

So Julie gets some name.

play59:31

And she's introduced to Koko.

play59:35

And Koko likes Julie.

play59:38

So Koko says, let me out.

play59:42

And Penny says, no, you can't get out.

play59:47

And Koko says, then let Julie in.

play59:52

And I thought that showed some fairly abstract reasoning

play59:58

or representation.

play60:01

And Penny didn't let Julie in.

play60:09

But Koko seemed to have a fair amount of declarative syntax.

play60:24

I don't know if she could do passives or anything like that.

play60:28

If you're interested, you probably

play60:30

can look it up on the web.

play60:34

Penny's owner-- I mean Penny thought that Koko

play60:38

knew 600 or 700 words.

play60:41

And a friend of ours was a teenager who worked for her.

play60:45

And what's his name?

play60:50

And he was convinced that Koko knew more than 1,000 words.

play60:57

But he said, you see, I'm a teenager

play60:59

and I'm still good at picking up gestures and clues better

play61:04

than the adults here.

play61:07

But anyway I gather Koko is still there.

play61:12

And I don't know if she's still learning more words.

play61:17

But every now and then we get a letter asking

play61:20

to send more money.

play61:26

Oh, in the last lecture, I couldn't

play61:34

think of the right crypto arithmetic example.

play61:57

I think that's the one that the Newell Simon book starts out

play62:00

with.

play62:01

So obviously, m is 1.

play62:04

And then I bet some of you could figure that out

play62:11

in 4 or 5 minutes.

play62:15

Anybody figured it out yet?

play62:31

Help.

play62:36

Send more questions.
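
The cryptarithm implied by "m is 1" is presumably SEND + MORE = MONEY. A brute-force solver takes only a few lines of Python (this sketch is my own, not part of the lecture):

```python
from itertools import permutations

# Assign distinct digits to the letters of SEND + MORE = MONEY.
letters = "SENDMORY"                      # the eight distinct letters in the puzzle
for digits in permutations(range(10), len(letters)):
    val = dict(zip(letters, digits))
    if val["S"] == 0 or val["M"] == 0:    # leading digits cannot be zero
        continue
    def num(word):
        # Convert a word into the integer its letters spell out.
        return int("".join(str(val[c]) for c in word))
    if num("SEND") + num("MORE") == num("MONEY"):
        print(num("SEND"), "+", num("MORE"), "=", num("MONEY"))
        # 9567 + 1085 = 10652, so M is indeed 1.
        break
```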

play62:41

Yeah?

play62:42

AUDIENCE: I have an example.

play62:44

For instance, I go out to a restaurant

play62:46

of this type of exotic food that I've never ever had before.

play62:51

And I end up getting sick from it.

play62:53

So what determines what I learned from this?

play62:58

Because there are many different possibilities.

play63:00

There is the one possibility of I

play63:02

learned to avoid the specific food I ate.

play63:05

Another possibility is like I learn

play63:07

to avoid that type of food, because it

play63:09

might contain some sort of spice that I react to badly.

play63:12

And a third possibility-- there might be more--

play63:15

I learn to avoid that restaurant,

play63:16

because it just might be a bad restaurant.

play63:19

So in this case, it's not entirely clear

play63:22

which one to pick.

play63:25

And, of course, in real life, I might

play63:27

go there again and comparatively try another food

play63:30

or try the same food at a different restaurant.

play63:32

But what do you think about this on that scenario, what causes

play63:35

people to pick which one?

play63:37

MARVIN MINSKY: The trouble is we keep thinking of ourselves

play63:40

as people.

play63:41

And what you really should think of yourself

play63:46

as a sort of Petri dish with a trillion bacteria in it.

play63:53

And it's really not important to you

play63:56

what you eat, but your intestinal bacteria are

play64:00

the ones who are really going to suffer,

play64:02

because they're not used to anything new.

play64:06

So I don't know what conclusion to draw from that.

play64:11

But--

play64:15

AUDIENCE: Previously, you mentioned

play64:17

that David Hume thought that knowledge

play64:20

represented as associations.

play64:23

And that occurs to me as being some sort of like a Wiki

play64:26

structure where entries have tags.

play64:29

So an entry might be defined by what tags it has

play64:32

and what associations it has.

play64:34

I'm wondering if that structure has

play64:36

been-- if somebody has attempted to code that

play64:39

into some kind of peripheral structure,

play64:41

has there been any success with putting

play64:44

that idea into a potential AI.

play64:54

MARVIN MINSKY: I don't know how to answer that.

play65:08

Do any psychologists use semantic networks

play65:12

as representations?

play65:13

Pat, do you know, has anybody--

play65:19

is anyone building an AI system with semantic representations

play65:26

or semantic networks anymore?

play65:29

Or is it all--

play65:30

everything I've seen has gone probabilistic

play65:33

in the last few years.

play65:37

Your project.

play65:38

Do you have any competitors?

play65:40

AUDIENCE: No.

play65:41

MARVIN MINSKY: Any idea what the IBM people are using?

play65:49

I saw a long article that I didn't read, yet but--

play65:53

AUDIENCE: Traditional information retrieval

play65:56

plus 100 hacks plus machine learning.

play66:01

MARVIN MINSKY: They seem to have a whole lot

play66:02

of slightly different representations

play66:06

that they switch among.

play66:10

AUDIENCE: But none of them are very semantic.

play66:14

AUDIENCE: Well, they probably have--

play66:16

I don't know, does anybody know what the answer is?

play66:18

But they must have little frame-like things

play66:21

for the standard questions.

play66:23

MARVIN MINSKY: Of course, the thing doesn't answer any--

play66:27

it doesn't do any reasoning as far as you can tell.

play66:29

AUDIENCE: Right.

play66:30

MARVIN MINSKY: So it's trying to match sentences

play66:34

in the database with the question.

play66:45

Well, what's your theory of why there

play66:48

aren't other groups working on what we used to and you are?

play66:54

AUDIENCE: Well, bulldozer computing is a fad.

play66:58

And if you can do better in less time that way

play67:03

than by figuring out how it really works,

play67:05

then that's what you do.

play67:10

No one does research on chess, no one

play67:14

does research on how humans might play chess,

play67:17

because the bulldozer computers have won.

play67:20

MARVIN MINSKY: Right.

play67:24

There were some articles on chess and checkers

play67:27

early in the game.

play67:28

But nothing recent as far as I know.

play67:35

AUDIENCE: So in many ways it's a local maximum phenomenon.

play67:40

So bulldozer computing stuff has got up

play67:42

to a certain local maximum.

play67:45

Until you can do better than that some other way, then

play67:48

[INAUDIBLE]

play67:50

MARVIN MINSKY: Well, I wonder if we

play67:51

could invent a new TV show where the questions are interesting.

play68:03

Like I'm obsessed with the question

play68:06

of why you can pull something with a string,

play68:09

but you can't push it.

play68:11

And, in fact, what was this-- we had

play68:16

a student who actually did something with that a long time

play68:19

ago.

play68:19

But I've lost track of him.

play68:23

But how could you make a TV show that

play68:28

had common sense questions rather than ones about sports

play68:33

and actors?

play68:39

AUDIENCE: Well, don't you imagine what

play68:41

happens when you push a string?

play68:43

It's hard to explain the--

play68:44

MARVIN MINSKY: It buckles.

play68:45

AUDIENCE: It's easy to imagine.

play68:46

MARVIN MINSKY: Yeah, So you can simulate it.

play68:48

AUDIENCE: Yeah.

play68:54

MARVIN MINSKY: Yeah.
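On the "you can simulate it" point: a string is easy to model as a chain of links that resist stretching but not compression, and that single property reproduces the pull-versus-push asymmetry. A minimal sketch, with all constants invented:

```python
# Tension-only chain: links pull back when stretched past their rest length
# but exert no force when compressed, which is the essential property of a
# string. Pulling one end drags the far end along; pushing transmits nothing.
def simulate(push_or_pull, steps=2000, n=20, rest=1.0, k=200.0, dt=0.005, damp=0.2):
    x = [i * rest for i in range(n)]          # node positions along a line
    v = [0.0] * n
    for _ in range(steps):
        f = [0.0] * n
        for i in range(n - 1):
            stretch = (x[i + 1] - x[i]) - rest
            if stretch > 0:                   # tension only: no compressive force
                f[i] += k * stretch
                f[i + 1] -= k * stretch
        for i in range(1, n):                 # node 0 is the "hand", moved directly
            v[i] += (f[i] - damp * v[i]) * dt
            x[i] += v[i] * dt
        x[0] += (-1 if push_or_pull == "pull" else +1) * 0.5 * dt   # move the hand
    return x[-1] - (n - 1) * rest             # how far the free end has moved

print("pull:", round(simulate("pull"), 2))    # free end follows the hand (large negative shift)
print("push:", round(simulate("push"), 2))    # free end does not move at all (0.0)
```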

play68:56

AUDIENCE: I have a question.

play68:58

So suppose in the future we can create

play69:00

a robot as intelligent and as smart as a human,

play69:03

and how should we evaluate it?

play69:04

When do we know that we've reached it, like which

play69:08

tests should it pass or which [INAUDIBLE] should [INAUDIBLE]?

play69:12

So for example, [INAUDIBLE] asked some pretty hard

play69:15

questions and seem to be intelligent.

play69:18

But all it is doing is making some attempts

play69:21

and then calculating some probability and stuff.

play69:23

Humans don't do that.

play69:24

They try to understand the question and look to answer it.

play69:28

But then suppose you can create a robot that

play69:30

can behave as it is like--

play69:32

I don't know, how would you evaluate

play69:34

when do you know that you reach something?

play69:40

MARVIN MINSKY: That's sort of funny,

play69:41

because if it's any good, you wouldn't have that question.

play69:56

You'd say, well, what can't it do?

play69:59

And why not?

play70:00

And you'd argue with it.

play70:06

In other words, people talk about passing the Turing test,

play70:11

or whatever.

play70:12

And it's hard to imagine a machine that you converse

play70:23

with for a while and then when you're told it's a machine,

play70:30

you're surprised.

play70:43

AUDIENCE: So I think, for example,

play70:45

you can make a machine to say some very intelligent and smart

play70:48

things, because like it may know,

play70:50

it takes all this information from different books

play70:52

and all this information that it has somewhere in a database,

play70:55

right.

play70:55

But then like when people speak they kind of dissent

play70:58

when you're speaking.

play70:59

How do you know like some robot understands something

play71:02

or doesn't understand?

play71:03

Or does it have to understand at all?

play71:05

MARVIN MINSKY: Well, I would ask it questions like why can't you

play71:09

push something with a string?

play71:18

Anyone have a Google working?

play71:22

What does Google say if you ask it that?

play71:25

Maybe it'll quote me.

play71:30

Or someone-- yeah?

play71:32

AUDIENCE: How would you answer that question, like why

play71:35

you can pull, but not push?

play71:45

MARVIN MINSKY: I'd say, well, it would buckle.

play71:47

And then they would say, what do you mean by buckle?

play71:50

And then I'd say, oh, it would fold up

play71:54

so that it got shorter without exerting any force at the end.

play71:58

Or blah, blah.

play72:00

I don't know.

play72:01

There are lots of answers.

play72:03

How would you answer it?

play72:08

A physicist might say, if you've got it really very, very,

play72:13

very straight, you could push it with a string.

play72:22

But quantum mechanics would say you can't.

play72:24

Yeah.

play72:26

AUDIENCE: I feel like if you--

play72:28

like the [INAUDIBLE] or like an interesting show

play72:32

would be like an alternate cooking

play72:34

show or something where you have to use

play72:37

an object that's, like, not normally used for that purpose.

play72:42

So like I want to paint a room, but you're not given a brush.

play72:45

You're given like a sponge.

play72:50

Or people pull up, like, eggplants when they want it painted purple.

play72:55

So it has to represent the thing in a different way other than--

play73:01

MARVIN MINSKY: Words.

play73:05

That's interesting.

play73:08

When I was in graduate school, I took a course in knot theory.

play73:12

And, in fact, you couldn't talk about them.

play73:14

And if anybody had a question, they'd

play73:17

have to run up to the board.

play73:18

And, you know, they'd have to do something like this.

play73:32

Is that a knot?

play73:36

No.

play73:39

No, that's just a loop.

play73:45

But if you were restricted to words,

play73:50

it would take a half hour to--

play73:55

that's interesting.

play74:00

Yeah?

play74:03

AUDIENCE: You mentioned solving the string puzzle by imagining

play74:06

the result. And I think I heard someone else say,

play74:08

computers can do that in some way.

play74:09

It can simulate a string.

play74:10

And we know enough physics that you

play74:12

can give a reasonable approximation of a string.

play74:15

But I find that the question that is often not asked in AI

play74:20

is--

play74:21

or by computers-- is how does one choose the correct model

play74:25

with which to answer questions?

play74:27

There's a lot of questions we're really good at answering

play74:28

with computers.

play74:29

And some of them, we have genetic algorithms they're good

play74:32

for, some of them based in statistics, some of them

play74:34

formal logic, some of them basic simulation.

play74:37

But this is all--

play74:38

to me this is the core question, because this

play74:40

is what people decide, and no one

play74:42

seems to have ever tackled an [INAUDIBLE]..

play74:44

MARVIN MINSKY: Well, for instance,

play74:47

if somebody asks the question, you

play74:49

have to make up a biography of that person.

play74:53

So because the same question from different people

play74:57

would get really different answers.

play75:05

Why does a kettle make a noise when the water boils?

play75:11

If you know that the other person is a physicist,

play75:15

and it's easy to think of things to say, but--

play75:22

it's not a very good example.

play75:27

What's the context of that?

play75:34

In a human conversation, how does each person

play75:37

know what to say next?

play75:39

AUDIENCE: I guess one question is,

play75:42

how do people decide what evidence to use

play75:45

to tackle a problem?

play75:46

And I guess, the more fundamental question

play75:48

is, when people are solving problems,

play75:50

how do they decide how they're going

play75:52

to think about the problem?

play75:53

Are they going to think about it by visualizing it?

play75:56

Think about it by trying to [INAUDIBLE]

play75:58

Think about it by analogy or formal logic?

play76:02

Of all the tools we have, why do we pick the ones we do?

play76:08

MARVIN MINSKY: Yeah, well, that goes

play76:10

back to if you make a list of the 15 most common ways

play76:16

to think and somebody asks you a question or asks,

play76:24

why does such and such happen, how do you decide which

play76:29

of your ways to think about it?

play76:31

And I suspect that's another knowledge base.

play76:38

So we have commonsense knowledge about,

play76:42

you know, if you let go of an object, it will fall.

play76:46

And then we have more general knowledge

play76:55

about what happens when an object falls.

play76:58

Why didn't it break?

play77:01

Well, it actually did.

play77:04

Because here's a little white thing, which turned into dust.

play77:10

And so that's why I think you need to have five or six

play77:16

or however many different levels of representation.

play77:20

So as soon as somebody asks a question,

play77:24

one part of your brain is coming up with your first idea.

play77:31

Another part of your brain is saying,

play77:33

is this a question about physics or philosophy

play77:36

or is it a social question?

play77:41

Did this person ask it because they actually want to know

play77:44

or they want to trap me?

play77:52

So I think you--

play77:56

generally this idea of this--

play77:59

there must be many kinds of society of mind models

play78:03

that people have.

play78:03

And each person, whenever you're talking to somebody,

play78:11

you choose some model of what is this conversation about?

play78:15

Am I trying to accomplish something by this discussion?

play78:20

Is it really an interesting question?

play78:24

Do I not want to offend the person

play78:27

or do I want to make him go away forever?

play78:31

And little parts of your brain are making all these decisions

play78:35

for you.
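A crude sketch of the selector being described here: a few critics recognize what kind of question this is, and a table maps that kind to ways of thinking to try first. The categories, keywords, and methods below are all placeholders; the point is only the shape of the extra knowledge base about which method fits which kind of problem.

```python
# Critic-selector sketch: critics recognize what kind of problem this is,
# and a table maps problem kinds to ways of thinking to try first.
# Categories, keywords, and methods are all placeholder assumptions.
CRITICS = {
    "physical":  ["fall", "push", "string", "boil", "force"],
    "social":    ["offend", "friend", "agree", "trap"],
    "numerical": ["how many", "sum", "digits", "probability"],
}

WAYS_TO_THINK = {
    "physical":  ["simulate it", "imagine the result", "recall a similar case"],
    "social":    ["build a model of the other person", "ask what they want"],
    "numerical": ["search systematically", "apply formal rules"],
    "default":   ["reason by analogy", "ask someone"],
}

def classify(question):
    q = question.lower()
    scores = {kind: sum(kw in q for kw in kws) for kind, kws in CRITICS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "default"

def ways_for(question):
    return WAYS_TO_THINK[classify(question)]

print(ways_for("Why can't you push something with a string?"))   # physical methods
print(ways_for("Will this question offend them?"))                # social methods
```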

play78:36

I'd like to introduce Bob Lawler, who's visiting.

play78:41

AUDIENCE: One of my favorite stories about Feynman,

play78:46

it comes from asking him to dinner one night.

play78:49

And I asked him how he got to be so smart.

play78:56

And he said that when he was an undergraduate here,

play78:59

he would consider every time he was able to solve a problem,

play79:05

just the beginning step of how to exploit that.

play79:09

And what he would then do would be

play79:10

to try to reformulate the problem

play79:13

in as many different representations as he could.

play79:18

And then use his solution of the first problem as a guide

play79:22

in working out alternate representations and procedures

play79:27

in that.

play79:28

The consequence according to him was

play79:30

that he became very good at knowing which

play79:33

was the most fit representation to use

play79:37

in solving any particular problem that he encountered.

play79:40

And he said that that's where his legendary capability

play79:45

in being so quick with good solutions and good methods

play79:48

for solutions came from.

play79:50

So maybe a criterion for an intelligent machine

play79:55

will be one that had a number of--

play79:57

15 different ways of thinking and applied them regularly

play80:04

to develop alternative information

play80:08

about different methods of problem solving.

play80:11

You would expect it then to have some facility at choosing

play80:16

based on its experience.

play80:17

MARVIN MINSKY: Yeah, he wrote something about--

play80:23

because the other physicists would

play80:25

argue about whether to use Heisenberg matrices

play80:29

or Schrodinger's equation.

play80:32

And he thought he was the only one who

play80:35

knew how to solve each problem both ways, because most

play80:40

of the other physicists would get

play80:42

very good at one or the other.

play80:47

He had another feature which was that if you argued with him,

play80:57

sometimes he would say, oh, you're right, I was wrong.

play81:04

Like he was once arguing with Fredkin

play81:06

about could you have clocks all over the universe

play81:11

that were synchronized.

play81:13

And the standard idea is you couldn't because of relativity.

play81:21

And Fredkin said, well, suppose you start out on Earth

play81:25

and you send a huge army of little bacteria-sized clocks

play81:33

and send them through all possible routes to every place

play81:37

and figure out and compensate for all the accelerations they

play81:47

had experienced on the path.

play81:50

Then wouldn't you get a synchronous time everywhere?

play81:53

And Feynman said, you're right, I was wrong--

play81:58

without blinking.

play82:05

He may have been wrong, but--

play82:31

More questions?

play82:38

AUDIENCE: Along the same line as his question

play82:39

about how do we know what method to use for solving problems.

play82:44

Kind of curious how we know what data set

play82:47

or what data to use when solving a problem.

play82:49

Because we have so much sensory information

play82:51

at any moment and so much data we have from experience.

play82:55

But like when you get a problem, you instantly--

play82:57

and I guess k-line is sort of a solution for that.

play82:59

But I'd be curious how you could possibly represent good data

play83:04

relationships in a way that a computer might be able to use.

play83:07

Because like right now, the problem

play83:08

is that we always have to very narrowly define

play83:11

a problem for a machine to be able to solve it.

play83:14

But I feel like if we could come up

play83:16

with good methods for filtering massive data

play83:19

sets to identify what might be relevant, in a way that doesn't involve

play83:22

like trial and error.

play83:25

MARVIN MINSKY: Yes, so the thing must

play83:30

be that if you have a problem, how do you characterize it?

play83:42

How do you think, what kind of problem is this

play83:46

and what method is good for that kind of problem?

play83:49

So I suppose that people vary a lot.

play83:52

And it's a great question.

play84:06

That's what the critics do.

play84:07

They say what kind of problem is this?

play84:10

How do I recognize this particular predicament?

play84:14

And I wish there were some psychologists who

play84:24

thought about that the way Newell and Simon did, god,

play84:28

in the 1960s.

play84:30

That's 50 years ago.

play84:32

How many of you have seen that book

play84:34

called Human Problem Solving.

play84:37

It's a big, thick book.

play84:38

And it's got all sorts of chapters.

play84:43

That's the one I mentioned the other day where they actually

play84:46

had some theories of human problem

play84:49

solving and simulated this.

play85:00

They gave subjects problems like this and said,

play85:06

we want you to figure out what numbers those are.

play85:09

And they lied to the subjects and said,

play85:12

this is an important kind of problem in cryptography.

play85:15

The secret agents need to know how

play85:19

to decode cryptograms of this sort, where usually it's

play85:25

the other way around.

play85:26

The numbers stand for letters.

play85:28

And there's some complicated coding.

play85:29

But these are simple cases.

play85:32

So you have to figure out that sort of thing.

play85:36

And then the book has various chapters

play85:39

on theories of how you recognize different kinds of problems

play85:43

and select strategies.

play85:46

And, of course, some people are better than others.

play85:49

And believe it or not, at MIT there was almost a whole decade

play85:56

of psychologists here who were studying the psychology

play86:01

of 5-person groups.

play86:08

Suppose you take five people and put them in a room

play86:12

and give them problems like this, or not the same cryptic,

play86:17

but little puzzles that require some cleverness to solve.

play86:23

And you record it on video.

play86:26

They didn't have video in those days.

play86:29

So it was actual film.

play86:32

And there's a whole generation of publications

play86:38

about the social and cognitive behavior of these little groups

play86:45

of people.

play86:46

They zeroed in on 5-person groups for reasons

play86:49

I don't remember.

play86:51

But it turned out that almost always when

play86:53

you had the group divided into two competitive groups with two

play86:59

and three, every now and then they would reorganize.

play87:03

But it was more a study in social relations

play87:07

than in cognitive psychology.

play87:10

But it's an interesting book.

play87:15

There must be contemporary studies like that

play87:19

of how people cooperate.

play87:21

But I just haven't been in that environment.

play87:26

Any of you taken a psychology course recently?

play87:31

Not a one?

play87:34

Just wonder what's happened to general psychology.

play87:39

I used to sit in on Tauber and a couple of other lecturers here.

play87:44

And psychology, of course, was sort of

play87:46

like 20% optical illusions.

play87:51

AUDIENCE: Yeah, they still do that--

play87:52

MARVIN MINSKY: Stuff like that.

play87:53

AUDIENCE: They also concentrate a lot

play87:56

on developmental psychology.

play87:58

MARVIN MINSKY: Well, that's nice to hear,

play88:00

because I don't believe there was

play88:05

any of that in Tauber's class

play88:08

AUDIENCE: I think Professor Gabrieli now teaches

play88:10

the introductory psychology.

play88:13

And he--

play88:14

MARVIN MINSKY: Do they still believe Piaget or do

play88:16

they think that he was wrong?

play88:21

AUDIENCE: I think they probably take the same approach

play88:24

as with like Freud, they would say

play88:27

great ideas and a revolution, but they also don't

play88:31

think he's the end of the--

play88:36

MARVIN MINSKY: Well, he got--

play88:39

AUDIENCE: I know the childhood development class,

play88:43

you read Piaget, his books.

play88:46

MARVIN MINSKY: Yeah.

play88:47

In Piaget's later years, he got into algebra.

play88:51

And he wanted to be more scientific and studied

play89:00

logic and a few things like that and became less scientific.

play89:05

It was sort of sad to--

play89:09

I can imagine being browbeaten by mathematicians,

play89:13

because they're the ones who were getting published.

play89:16

And he only had-- how many books did Piaget--

play89:23

AUDIENCE: But if I may add a comment about Piaget.

play89:26

It really comes from an old friend of many of us, Seymour.

play89:31

As you know, he was, of course, Piaget's mathematician

play89:34

for many years.

play89:35

MARVIN MINSKY: We got people from Piaget's lab.

play89:39

AUDIENCE: But Seymour said that he felt that Piaget's best

play89:42

work was his early work, especially

play89:45

like building his case studies.

play89:47

And one time when we were talking

play89:49

about the issue of focusing from the AI lab

play89:54

and worked on in psychology here,

play89:56

Seymour said he felt that was less necessary than more

play90:00

of a concentration on AI, because he expected

play90:04

in the future the world of study of the mind

play90:07

would separate into two individual studies, one much

play90:11

more biological, like the neurosciences of today,

play90:16

and the other focused more on the structure of knowledge

play90:18

and on representations and in effect

play90:21

the genetic epistemology of Piaget.

play90:24

Then he added something that became a quote later.

play90:27

And it was, "Even if Piaget's marvelous theory today

play90:31

proved to be wrong, he was sure that whatever replaced

play90:35

it would be a theory of the same sort, one

play90:38

of the development of knowledge in all its changes."

play90:42

So I don't think people will get away from Piaget however much

play90:46

they want.

play90:47

MARVIN MINSKY: I don't think so either.

play90:51

I meant to introduce our visitor here, because Bob Lawler here

play91:04

has reproduced a good many of the kinds of studies

play91:09

that Piaget did in the 1930s and '40s.

play91:14

And if you look him up on the web--

play91:18

you must have a few papers.

play91:20

AUDIENCE: I better tell you what the website is, because it

play91:22

is still hidden from web searches.

play91:25

It's nlcsa.net.

play91:27

MARVIN MINSKY: That would be hard to--

play91:31

AUDIENCE: Natural Learning Case Study Archive dot net.

play91:43

It's still in process, still in development.

play91:46

But it's worth looking at.

play91:48

MARVIN MINSKY: How many children did Piaget have?

play91:50

AUDIENCE: Well, Piaget had three children--

play91:52

MARVIN MINSKY: So did you--

play91:53

AUDIENCE: Not in his study.

play91:54

But what he did was to mix together

play91:58

the information from all three studies

play92:00

and supported the ideas with which he began.

play92:03

So it was illustrations of his theories.

play92:09

MARVIN MINSKY: Anyway, Bob, has quite a lot of studies

play92:11

about how his children developed concepts of number and geometry

play92:18

and things like that.

play92:20

And I don't know of anyone else since Piaget

play92:24

who has continued to do those sorts of experiments.

play92:29

There were quite a lot at Piaget's institute in Geneva

play92:34

for some years after Piaget was gone.

play92:38

But I think it's pretty much closed now, isn't it?

play92:41

AUDIENCE: Well, the last psychologist Piaget

play92:43

hired was Jacques Benesch, who is no longer at the university.

play92:50

He retired.

play92:51

And it has been taken over by the neo-Piagetians, who

play92:55

are doing something different.

play92:58

MARVIN MINSKY: Is there any other place?

play93:00

Well, there was Yoichi's lab on children in Japan.

play93:07

AUDIENCE: There are many people who take Piaget seriously

play93:13

in this country and others.

play93:21

AUDIENCE: So Robert mentioned that Feynman

play93:25

had more representations of the world than like usual people.

play93:33

Like when I talked about Einstein and the glial cells,

play93:38

I referred to that because I believe

play93:41

that k-lines is our way of representing the world.

play93:45

And maybe Einstein had better ways of representing the world.

play93:51

And I believe that, for example, agents as resources

play93:58

are not different from Turing machines.

play94:00

You can create a very simple Turing

play94:03

machine that acts like an agent, and you

play94:06

have some mental states.

play94:09

But there is no, I believe, good way of representing the world

play94:19

and updating the representation of the world.

play94:23

Like it seems to me that when you grow up,

play94:29

you are learning how to represent

play94:33

the world better and better.

play94:34

And you have some layers.

play94:36

And that's all k-lines.

play94:39

And if glial cells are actually related to k-lines,

play94:48

it means that Einstein had, like, better hardware for

play94:53

representing the world.

play94:55

And that's why he would be smarter than other people.
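Since K-lines keep coming up, a bare-bones sketch of the idea may help: a K-line records which agents were active when something worked, and reactivating it later restores that partial mental state. The agent names below are arbitrary, and this is only the data-structure skeleton of the idea from The Society of Mind, not a claim about glial cells or anyone's hardware.

```python
# Minimal K-line sketch: remember the set of agents that were active when a
# problem was solved; reactivating the K-line turns those agents back on,
# reinstating (part of) the mental state that worked before.
class Mind:
    def __init__(self):
        self.active = set()      # currently active agents
        self.klines = {}         # name -> frozen set of agents

    def activate(self, *agents):
        self.active.update(agents)

    def remember(self, name):
        """Attach a K-line to whatever agents are active right now."""
        self.klines[name] = frozenset(self.active)

    def recall(self, name):
        """Reactivate the agents the K-line was attached to."""
        self.active = set(self.klines[name])

mind = Mind()
mind.activate("grasping", "wood-frame", "small-parts", "patience")
mind.remember("building-with-blocks")       # the state that worked

mind.active.clear()                          # later, a blank mind...
mind.recall("building-with-blocks")          # ...gets that state back
print(sorted(mind.active))
```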

play95:00

MARVIN MINSKY: Well, it's hard to--

play95:08

I'm sure that that's right that you

play95:11

have a certain amount of hardware,

play95:15

but you can reconfigure some of it.

play95:23

Nobody really knows.

play95:25

But some brain centers may have only a few neurons.

play95:29

And maybe there's some retrograde signals.

play95:36

So that if two brain centers are simultaneously activated,

play95:42

then usually the signals only go one way,

play95:48

from one to the other.

play95:50

They have to go through a third one to get back.

play95:53

But it could be that the brain--

play95:58

that the neurons have property that if two centers are

play96:01

activated, maybe that causes more connections

play96:04

to be made between them that can then be programmed more.

play96:08

I don't think anybody really has a clear idea of whether you

play96:15

can grow new connections between brain centers

play96:18

that are far apart.

play96:19

Does anybody know?

play96:20

Is there anything--

play96:24

AUDIENCE: It used to be common knowledge

play96:25

that there was no such thing as adult neurogenesis.

play96:28

And now it is known that it exists

play96:30

in certain limited regions of the brain.

play96:32

So in the future, it may be known

play96:33

that it exists everywhere.

play96:35

MARVIN MINSKY: Right.

play96:35

Or else that those experiments were wrong.

play96:40

And they were in a frog rather than a person.

play96:50

AUDIENCE: Lettvin claimed that you

play96:51

could take a frog's brain out and stick it in backwards

play96:55

and pretty soon it would behave just like it used to.

play96:58

MARVIN MINSKY: Lettvin said?

play96:59

AUDIENCE: Yeah.

play97:00

Of course.

play97:02

I don't know if he was kidding or not.

play97:04

You never could tell.

play97:05

MARVIN MINSKY: You could never tell when he was kidding.

play97:14

Lettvin was a neuroscientist here

play97:17

who was sort of one of the great all time neuroscientists.

play97:24

He was also one of the first scientists

play97:28

to use transistors for biological purposes

play97:34

and made circuits that are still used in every laboratory.

play97:38

So he was a very colorful figure.

play97:43

And everyone should read some of his older papers.

play97:49

I don't know that there were any recent ones.

play97:52

But he had an army of students.

play98:00

And he was extremely funny.

play98:06

What else?

play98:10

AUDIENCE: So continuing on the idea of hardware

play98:12

versus software, what do you think about the idea

play98:16

that intelligence or humans may need strong instincts as when

play98:23

they're born in order like-- hence

play98:25

the interplay between their instincts,

play98:27

like they know to cry when they're hungry

play98:29

or to look for their mother.

play98:31

They need these instincts in order

play98:32

to develop higher orders of knowledge.

play98:37

MARVIN MINSKY: You'd have to ask L. Ron Hubbard for--

play98:50

I don't recall any real attempts to--

play99:03

I don't think I've ever run across anybody claiming

play99:09

to have correlations between prenatal experience

play99:13

and the development of intelligence.

play99:18

AUDIENCE: That's not what I'm talking about.

play99:20

I'm talking about before intelligence

play99:23

is being developed, like you learn language,

play99:25

before you learn language, you need

play99:26

to have a motivation to do something.

play99:28

So you need to have instincts, instinctual reactions

play99:31

to things.

play99:32

Like traditional experience with knowledge

play99:34

after you're born, you--

play99:36

MARVIN MINSKY: Well, children learn language,

play99:38

you know, at 12 to 18 months.

play99:42

What, are you saying that they need some preparation?

play99:49

I'm not sure what you're asking.

play99:51

AUDIENCE: So think of it from an engineering point of view.

play99:53

If you were to build like a robot, what you need to program

play99:58

is some instincts, some like rule of thumb

play100:03

algorithms in order to get it started in the world

play100:06

in order to build experiential knowledge.

play100:08

MARVIN MINSKY: You might want to build something

play100:10

like a difference engine, so that you can represent a goal

play100:14

and it will try to achieve it.

play100:16

So you need some engine for producing any behavior at all.
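A minimal sketch of that difference-engine idea, essentially GPS-style means-ends analysis: to achieve a goal fact, find an operator that adds it, recursively achieve the operator's preconditions, and apply it. The facts and operators below are invented placeholders.

```python
# Difference-engine sketch (means-ends analysis): to achieve a goal fact,
# pick an operator that adds it, recursively achieve that operator's
# preconditions, then apply it. Operators and facts are illustrative only.
OPERATORS = [
    {"name": "walk to table", "adds": {"at-table"},      "removes": set()},
    {"name": "pick up block", "adds": {"holding"},       "removes": set(),       "needs": {"at-table"}},
    {"name": "stack block",   "adds": {"block-stacked"}, "removes": {"holding"}, "needs": {"holding"}},
]

def achieve(state, goal, depth=5):
    """Return (plan, new_state) that makes every fact in `goal` true, or None."""
    plan = []
    for fact in goal:
        if fact in state:
            continue
        for op in OPERATORS:
            if fact not in op["adds"]:
                continue
            sub = achieve(state, op.get("needs", set()), depth - 1) if depth else None
            if sub is None:
                continue                      # can't satisfy this operator's preconditions
            subplan, state = sub
            plan += subplan + [op["name"]]
            state = (state | op["adds"]) - op["removes"]
            break
        else:
            return None                       # nothing adds this fact
    return plan, state

result = achieve(set(), {"block-stacked"})
print(result[0])   # ['walk to table', 'pick up block', 'stack block']
```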

play100:22

AUDIENCE: Right.

play100:22

So like if you take the approach that like maybe to build an AI,

play100:26

you should build like an infant robot

play100:29

and then you teach it as you would like a human child.

play100:33

Then would it be useful to make it

play100:35

dependent on like some other figure

play100:37

in order to help it learn how to do things like a human child

play100:41

would?

play100:46

MARVIN MINSKY: Well, in order to learn,

play100:48

you have to learn from something.

play100:50

And one way to learn is in isolation, just to have some--

play100:57

you could build a goal to predict what will happen.

play101:01

And the best way to predict, as Alan Kay put it once,

play101:06

the best way to predict the future is to invent it.

play101:10

So you could make a--

play101:17

or you could put a model of an adult in it to start with, so that--

play101:26

in other words, one way to make a very smart child

play101:29

is to copy its mother's brain into a little sub-brain when

play101:34

it's born.

play101:35

And then it could learn from that instead

play101:37

of depending on anybody else.

play101:39

I'm not sure-- you have to start with something.

play101:54

Of course, humans, as Bob mentioned or someone mentioned,

play102:02

if you take a human baby and isolate it,

play102:06

it looks like it won't develop language by itself, because--

play102:14

I don't know what because.

play102:22

In fact, I remember one of our children

play102:25

who was just learning to talk.

play102:27

And something came up, and she said, what because is that.

play102:36

Do you remember?

play102:37

It took a while to get her to say why.

play102:43

She would come up and say what because.

play102:46

And I would say, you're asking why did this.

play102:52

After a long time she got the hint.

play102:54

But-- why do all w-h words start with w-h?

play103:02

AUDIENCE: One of them doesn't--

play103:04

how.

play103:06

MARVIN MINSKY: Could you say whow?

play103:12

How.

play103:15

Is there a theory?

play103:16

AUDIENCE: Not that I know of.

play103:18

MARVIN MINSKY: It's a basic sound

play103:20

telling you're making a query before you

play103:22

can do the rising inflection.

play103:26

It's interesting.

play103:27

Is it true in French?

play103:30

Quoi?

play103:32

The land of the silent letter.

play103:40

Anybody know what's the equivalent of w-h words

play103:45

in your native language?

play103:47

AUDIENCE: N.

play103:48

MARVIN MINSKY: What?

play103:48

AUDIENCE: N.

play103:49

MARVIN MINSKY: N?

play103:50

AUDIENCE: Yeah.

play103:50

MARVIN MINSKY: In what?

play103:52

AUDIENCE: Turkish.

play103:52

MARVIN MINSKY: Really?

play103:54

They all start with n?

play103:55

Wow.

play104:00

Interesting.

play104:03

Maybe the infants have an effect on something.

play104:08

Do questions in Turkish end with a rise?

play104:12

AUDIENCE: Yeah.

play104:14

So only the relevant w-h questions--

play104:19

OK, all questions end in kind of an inflection.

play104:22

But normally, you have a kind of little word

play104:26

that you would put at the end of any sentence

play104:29

to make it into a question, except for the w-h questions,

play104:35

which stand alone.

play104:36

You don't need it for them.

play104:39

MARVIN MINSKY: Yes, you'd say, this is expensive?

play104:43

They don't need the w-h if you do enough of that.

play104:50

Huh.

play104:54

So question, is that in the brain at birth?

play105:01

AUDIENCE: Is that pattern mirrored in English where

play105:04

you can say, is this expensive?

play105:05

But you can say, how expensive is this,

play105:08

without that rising intonation.

play105:10

It mirrors using the separate word,

play105:14

but you don't need that separate word if it's an "n" word.

play105:17

AUDIENCE: But if you're saying how expensive

play105:19

is this without the question inflection,

play105:22

it almost sounds like you're making

play105:24

a statement about just how ridiculously expensive it is.

play105:28

Like you're going, how expensive is this

play105:30

versus how expensive is this?

play105:55

MARVIN MINSKY: Well, I should let you go.
