8. Question and Answer Session 2

MIT OpenCourseWare
4 Mar 2014 · 109:14

Summary

TLDR: In this video transcript, Marvin Minsky, one of the pioneers of artificial intelligence, holds a wide-ranging conversation with the audience. He discusses changes in educational patterns, including the performance of international students and the growing proportion of women in technology. Minsky shares his observations of student patterns over the past decades and voices concern that fewer students now go on to become researchers or faculty members. He recalls the growth of academic and research institutions in the 1960s, including how companies such as General Motors and IBM supported basic research. He also explores consciousness, cognitive psychology, the future of artificial intelligence, and how knowledge can be extracted from experience. He is open to the idea that AI might advance through collective intelligence or community-driven research, but also points out the limits of that approach for problems that require creative solutions. The discussion spans a broad range of topics and shows the depth of Minsky's insight into AI, cognitive science, and education.

Takeaways

  • 📚 Marvin Minsky says that despite his many years of teaching, he has no fixed lecture format; instead he encourages students to raise questions and explore them.
  • 🌍 Minsky has observed that students from different countries have different educational backgrounds; he remarks that foreign students seem better educated than American students.
  • 👩‍🎓 He notes that the proportion of women at MIT has grown from about 20% when he arrived to 48% or 53%, showing how the gender balance has shifted.
  • 🎓 Minsky mentions that most of his students used to become researchers or faculty members, but now very few do, and he is not sure of all the reasons.
  • 📈 He recalls that in the 1960s universities and research institutions were still growing, and many large laboratories, such as those of IBM and General Motors, supported basic research.
  • 🔬 Minsky describes his time at the RAND Corporation, which did a great deal of basic research and offered a free academic environment.
  • 🧠 He raises questions about consciousness and thinking, challenging the once-popular left-brain/right-brain personality theory as an oversimplification.
  • 🤖 On artificial intelligence, Minsky discusses the limits of evolutionary strategies for creating AI, in particular because evolution does not remember the lessons of its failures.
  • 🧬 He offers views on genetics, including sweeping lethal recessive genes out of the genome and what that could mean for future human health.
  • 🌐 Minsky also explores the relationship between information integration and consciousness, and how the brain processes and integrates information.
  • ⚙️ Finally, he discusses how AI research is organized, including how community and collaboration might advance AI technology.

Q & A

  • What factors affecting the quality of students' education did Marvin Minsky mention in the conversation?

    -Minsky mentioned that foreign students seem better educated than their American counterparts. He also noted the shift in the gender ratio, pointing out that when he first came to MIT women made up only about 20% of students, whereas now the figure is around 48%.

  • How does Professor Minsky view the changes in the student body over the past decades?

    -Minsky observed the rising proportion of women among students, and that international students seem better educated than American ones. He also noted that most of his students once went on to become researchers or faculty members, but that very few do now.

  • Why does Professor Minsky think fewer students become researchers or faculty members than before?

    -Minsky offered several possible reasons, including the growth of universities and research institutions in the 1960s and the basic research supported by large companies such as IBM and General Motors. He noted that there are far fewer such research institutions now, and that even places like CBS Laboratories and Westinghouse are not what they once were.

  • What is Professor Minsky's view on the decline of academic positions and the difficulty of academic careers?

    -Minsky expressed concern about the shrinking number of academic positions, which makes the path to a professorship much harder. He noted that many people recognize this early and instead take jobs on Wall Street and the like, sneaking their research in on the side.

  • Which historical research institutions or companies important to AI and computer science did Professor Minsky mention?

    -Minsky mentioned IBM, General Motors, CBS Laboratories, Westinghouse, Stanford Research Institute, and the RAND Corporation, all of which made important early contributions to computer science and artificial intelligence research.

  • What is Professor Minsky's view of consciousness?

    -Minsky holds that consciousness is not a central mystery; rather, the problem of consciousness will resolve itself naturally once a good theory of psychology appears. He criticized philosophers and psychologists who treat certain perceptual experiences, such as the perception of color, as indivisible, fundamental problems: the so-called qualia.

  • What does Professor Minsky think about the potential of evolution for creating artificial intelligence?

    -Minsky argued that although evolution can produce complex life forms, it has limits as a way of creating AI. He pointed out that evolution keeps no record of failed mutations, so every lethal mutation eventually kills some individual. If we could learn from failures and avoid repeating the same mistakes, developing AI might be far more effective.

  • How does Professor Minsky view the current state of AI research?

    -Minsky believes that although a great deal of money goes into AI research, much of it flows in the wrong directions. He worries that most research concentrates on easy problems rather than the harder ones, and argues that more innovative and creative approaches are needed to move the field forward.

  • What does Professor Minsky think of the idea of AI one day leading a government?

    -Minsky answered the question with humor, noting that in many science fiction stories the AI has already become the government's leader, but he regards this as fictional speculation.

  • Did Professor Minsky offer concrete suggestions for improving AI systems?

    -Minsky mentioned several ideas, such as improving systems through higher-level reflection and self-reflection. He described adding "critics" to AI systems: components that can point out mistakes and prevent them from happening again.

  • What is Professor Minsky's view on AI applications in medicine?

    -Minsky cited a success story of Doug Lenat's in medicine, in which an AI system answered physicians' queries, for example about surgical complications. He said the system was loved by the doctors at the Cleveland Clinic and was written up in a magazine; it was a genuine success.

  • Did Professor Minsky discuss the funding of AI research?

    -Yes. He noted that much of the money flows toward easy problems rather than hard ones. He also floated the idea of giving programmers a sum of money to see whether they could solve a particular AI problem.

Outlines

00:00

😀 Opening and Support for MIT OpenCourseWare

The session opens with a note that the content is provided under a Creative Commons license and an appeal to support MIT OpenCourseWare so it can keep offering free, high-quality educational resources. Marvin Minsky says he has not prepared a lecture and instead takes random questions from the audience. Asked whether he has noticed patterns in students over his long teaching career, Minsky remarks that foreign students seem better educated than American ones and that the proportion of women at MIT has grown.

05:01

😉 Academic Trends and Personal Observations

Minsky discusses the changes he has observed in students, including the career paths they choose after graduating. He recalls the growth of academic and research institutions in the 1960s, mentioning several large laboratories including IBM's, and voices concern about the current state of research. He also brings up the RAND Corporation and asks the audience what they know about it.

10:03

🤔 The Challenges of Academic Careers and Taiwan's Academic Expansion

Minsky discusses how much harder academic positions are to obtain now than in the past. He mentions Taiwan's creation of many new mathematics departments and asks the audience about it. He also recounts how an Italian government committee disrupted an emerging AI group, and notes that occasionally there are researchers, such as Isaac Newton, who prefer to work alone.

15:03

🧐 Psychology and Cognitive Mechanisms

Minsky explores psychology and cognitive mechanisms, including his view of how psychology grew out of logic. He mentions the contributions of philosophers such as David Hume, Spinoza, and Kant to theories of cognition, and discusses how psychology took shape as a discipline.

20:07

😶 A Dialogue on the Brain and Consciousness

Minsky and the audience dig into how the brain, consciousness, and memory work. They discuss the left-brain/right-brain distinction, definitions of consciousness, and the relationship between information integration and consciousness. Minsky is skeptical of consciousness as a well-defined concept and offers a dissenting view of the "hard problem" in psychology.

25:08

🤓 The Usefulness of Psychological Theories

Minsky questions the usefulness of certain psychological theories, particularly theories of consciousness and thinking. He stresses the importance of finding well-posed problems and solutions in psychological research, and criticizes philosophical positions that he sees as potentially harmful to it.

30:09

😲 Genetics, Evolution, and Artificial Intelligence

Minsky and the audience discuss the relationships among genetics, evolution, and artificial intelligence. They explore recessive genes, the shortcomings of the evolutionary process, and the prospects for AI. Minsky predicts that future technology may sweep lethal recessive genes out of the human genome.

35:11

🧬 Genetic Information and Bacterial Symbiosis

Minsky raises the topic of genetic information and bacterial symbiosis, considering how the genomes of gut bacteria might affect human health. The group discusses the ratio of bacterial to human cells and speculates about the role bacteria play in human evolution.

40:13

🤖 The Future of Artificial Intelligence

Minsky looks ahead at AI, discussing both its potential and its limits for solving particular problems. He touches on the possibility of AI in government leadership and considers how computation and programming methods can be used to understand and build intelligent systems.

45:14

🧐 The Relationship Between Psychology and AI

Minsky discusses the relationship between psychology and artificial intelligence. He holds that while psychological knowledge helps in understanding AI, progress in AI does not urgently depend on progress in psychology. He notes the isolation of AI research and calls for more interdisciplinary collaboration.

50:15

😏 Challenges and Opportunities in AI Research

Minsky and the audience discuss the challenges facing AI research, including funding allocation, incentive structures, and how to attract more talent to the field. They also discuss innovation in AI research and how collaboration and open-source projects might advance it.

55:16

😮 Creativity in AI and Open Source

Minsky explores creativity in AI and the possibilities of open-source projects. He considers how community collaboration might tackle hard programming problems, and discusses both the potential and the limits of open-source projects as engines of innovation.

Keywords

💡MIT OpenCourseWare

MIT OpenCourseWare (OCW) is an initiative to make MIT course materials freely available to the public. The video mentions supporting MIT OCW so it can continue to provide high-quality educational resources.

💡Cognitive Psychology

Cognitive psychology is the branch of psychology that studies human cognitive processes, including perception, memory, thinking, language, and problem solving. The video touches on the development of cognitive psychology and its potential contribution to understanding artificial intelligence.

💡Artificial Intelligence

Artificial intelligence (AI) refers to intelligent behavior exhibited by human-made systems. The video mentions AI repeatedly, discussing its development, its applications, and its relationship to human cognition.

💡Evolutionary Algorithms

Evolutionary algorithms mimic the process of biological evolution, optimizing solutions to a problem through variation and selection. The video discusses their application to artificial intelligence and their limitations.

💡Neural Networks

A neural network is a computational model inspired by networks of neurons in the brain, used for processing complex data and recognizing patterns. The video mentions the application of neural networks in AI and early research on them.

💡Consciousness

Consciousness usually refers to an individual's state of awareness and cognition of the surrounding environment and of one's own thoughts and feelings. The video explores the concept of consciousness and the problems of modeling it in artificial intelligence.

💡Collective Intelligence

Collective intelligence is the phenomenon of solving problems or making decisions by pooling the opinions or actions of many individuals. The video asks whether groups of people on the internet could jointly solve hard programming problems.

💡Cognitive Science

Cognitive science is an interdisciplinary field that studies human cognition and intelligence, including perception, thinking, memory, language, and consciousness. The video mentions its relationship to AI and its contribution to understanding intelligence.

💡Problem Solving

Problem solving is the process of identifying a problem, generating solutions, and carrying them out to reach a goal. The video discusses AI's progress and challenges in modeling human problem-solving ability.

💡Information Representation

Information representation concerns how data or concepts are encoded and stored in computer science and artificial intelligence. The video explores how different representations affect the performance of AI systems.

💡Computational Models

A computational model is an abstraction used to simulate or theorize about computational processes. The video mentions different computational models, such as neural networks and evolutionary algorithms, and their applications in AI.

Highlights

MIT OpenCourseWare supports the free provision of high-quality educational resources through donations and additional materials.

Marvin Minsky observes that foreign students seem better educated than American students.

The proportion of women at MIT has grown from about 20% when Minsky arrived to roughly 48% today.

Minsky discusses how students' career paths after graduation have changed, with fewer becoming researchers or faculty members.

The growth of universities and research institutions in the 1960s, such as IBM's laboratories, supported basic research.

Minsky criticizes current research funding patterns, including annual renewals and frequent reporting requirements.

Minsky discusses the difficulty of academic careers and the trend of students recognizing this early and turning to fields like Wall Street.

Minsky considers Taiwan's newly created mathematics departments and the effect of government decisions on research success.

Minsky's discussion of consciousness, including his skepticism of definitions of consciousness based on information integration.

Minsky's views on the contributions of psychology and cognitive science and their relationship to AI research.

Minsky discusses the possibilities and limits of creating artificial intelligence through evolutionary methods.

Minsky discusses the non-coding regions of the human genome, including remnants of old viruses and carried lethal genes.

Minsky's outlook on future genome-editing technology, such as the elimination of lethal recessive genes.

Minsky's view on the possibility of an artificial intelligence being elected to lead a government.

Minsky's views on creative versus engineering problems in AI research, and psychology's potential contribution to AI.

Minsky discusses AI research funding and how money tends to flow toward easy problems rather than hard ones.

Minsky's thoughts on the possibility of AIs playing games and procrastinating.

Minsky explores how to extract knowledge from experience and turn it into rules or learning.

Minsky discusses whether AI research could be crowdsourced.

Transcripts

play00:00

The following content is provided under a Creative

play00:02

Commons license.

play00:03

Your support will help MIT OpenCourseWare

play00:06

continue to offer high quality educational resources for free.

play00:10

To make a donation or to view additional materials

play00:12

from hundreds of MIT courses, visit MIT OpenCourseWare

play00:16

at ocw.mit.edu.

play00:22

MARVIN MINSKY: Well, I don't have a lecture.

play00:29

Go ahead.

play00:29

AUDIENCE: I had a random question.

play00:31

MARVIN MINSKY: Great.

play00:32

AUDIENCE: So you've been a teacher for a very long time.

play00:34

Have you noticed any patterns in the students

play00:37

over the years or decades?

play00:40

MARVIN MINSKY: Have I noticed any pattern in students?

play00:43

AUDIENCE: Yeah, like intellectual patterns or just

play00:47

people you're interested in, just anything.

play00:50

MARVIN MINSKY: Well, a few.

play00:55

The foreigners seem better educated than the Americans.

play01:02

There are more girls.

play01:05

When I came to MIT, it was about 20%.

play01:09

And I think now it's 53%.

play01:11

Does anyone know?

play01:14

AUDIENCE: It's like 48%.

play01:16

MARVIN MINSKY: What?

play01:17

AUDIENCE: 48%.

play01:18

MARVIN MINSKY: 48%?

play01:21

I read that it actually went past 50 for a few minutes.

play01:26

AUDIENCE: [LAUGHS]

play01:31

MARVIN MINSKY: No, I think I've complained about the future

play01:34

though, which is that a large proportion of my students,

play01:44

by students I mean the ones whose thesis--

play01:50

I hate to say supervised, because in the case of Pat

play01:58

Winston, for example, I learned much more than I--

play02:04

or Sussman.

play02:07

But most of the students became researchers or faculty members

play02:15

eventually.

play02:16

And now it varies.

play02:22

Now very few of them do.

play02:27

I'm not sure of all the reasons.

play02:32

In the 1960s, which is a long time ago,

play02:36

the universities were still growing,

play02:41

as an after effect of World War II, I suppose.

play02:45

I really don't know what caused these major trends.

play02:53

But there were also a lot of career research institutions

play03:02

that were large and growing.

play03:06

Even General Motors had places where

play03:09

there was some basic research.

play03:11

IBM was a big research laboratory

play03:15

that was supporting some very abstract and basic research

play03:23

of various sorts.

play03:24

I don't think there's very much of that now.

play03:29

Even CBS Laboratory.

play03:31

Westinghouse was doing interesting robotics.

play03:34

And of course Stanford Research Institute,

play03:41

which had no relation to Stanford.

play03:45

Still exists, and it's still pretty good.

play03:49

But in those early days, it was one of the three or four

play03:55

richest computer science and artificial intelligence

play04:01

research places.

play04:02

There was a place called the RAND Corporation,

play04:05

which I think still exists.

play04:06

Does anybody--

play04:07

AUDIENCE: Yeah.

play04:08

MARVIN MINSKY: I don't know what it does.

play04:10

Any idea?

play04:11

AUDIENCE: They do government [INAUDIBLE]

play04:15

sort of things, just in terms of writing and [INAUDIBLE]

play04:19

AUDIENCE: They make some pretty important things but not

play04:22

necessarily about war, economy, games, or politics [INAUDIBLE]

play04:27

MARVIN MINSKY: But in the 60s, it had a lot of basic research.

play04:37

It had Newell and Simon and me and a few other people.

play04:46

And we just went there, and you could

play04:50

walk on the beach in Santa Monica and go to your office

play04:53

and talk and do things.

play04:55

And no one ever bothered us.

play04:57

And we wrote lots of little papers.

play05:01

Anyway, grumble, grumble.

play05:04

Another feature was that places like the National

play05:10

Institute of Health had five year fellowships.

play05:15

And now you have to renew--

play05:17

there are very few appointments of that sort anywhere.

play05:22

And usually, no sooner do you get

play05:25

funded than you're starting to write

play05:28

proposals for the next year.

play05:31

And some people want reports every quarter.

play05:37

And Neil Gershenfeld, who was running a big lab here,

play05:42

wanted reports every month.

play05:44

And some of us finally gave up on that.

play05:50

That's a long answer.

play05:54

So if you want a career in being a professor,

play06:00

it's just harder to find now than it was then.

play06:05

And so a lot of people recognize this pretty early

play06:08

and find some place to work in Wall Street and stuff

play06:13

like that.

play06:14

There are lots of jobs for smart people.

play06:17

But then you have to sneak your research in on the side.

play06:25

Anybody can think of a way to fix it?

play06:28

[LAUGHTER]

play06:31

In the last 20 years, Taiwan made 100 new math departments

play06:35

I read somewhere.

play06:37

I don't know if any of you who know anything about Taiwan.

play06:42

I just wonder if that--

play06:44

AUDIENCE: Yeah.

play06:46

MARVIN MINSKY: Yes, were they successful?

play06:49

[LAUGHTER]

play06:51

Is there a lot of research there?

play06:53

AUDIENCE: No.

play06:56

MARVIN MINSKY: Very often, when a government

play06:58

decides on the right thing to do, it doesn't work.

play07:05

I had some friends in Italy who were

play07:07

trying to start an AI group.

play07:09

And they had accumulated a critical mass in--

play07:15

what's the big city in the north--

play07:19

Milan.

play07:21

And then some government committee

play07:25

said, oh, there is a bunch of computer scientists there,

play07:30

but there's no good computer scientists in Pisa and Verona.

play07:34

So the government can order a professor to leave one place

play07:41

and go somewhere else.

play07:43

So the next year, there were no groups.

play07:47

And occasionally, there are people like Isaac Newton

play07:51

who liked to work alone.

play07:53

[LAUGHTER]

play07:56

But I got the impression that the product

play08:01

of the Italian researchers diminished after that.

play08:06

Might be wrong.

play08:10

How about a more technical question?

play08:23

Thanks.

play08:23

AUDIENCE: It looks like you had a complicated diagram

play08:25

concerning story.

play08:28

Do you recall of any layers?

play08:32

MARVIN MINSKY: Yeah.

play08:34

AUDIENCE: Was that meant to be a bi-directional diagram?

play08:36

Because it worked from the bottom

play08:37

up as well as the top down.

play08:40

MARVIN MINSKY: I'm confused about whether that--

play08:42

let's see if I can find it.

play08:44

Why did this shut down?

play08:51

Do I dare press start?

play08:52

AUDIENCE: It's alive on the screen.

play08:55

MARVIN MINSKY: Oh my gosh.

play08:56

[LAUGHTER]

play08:58

I never saw that phenomenon before.

play09:05

AUDIENCE: Could you do [INAUDIBLE] displays?

play09:08

MARVIN MINSKY: Yeah, I can.

play09:09

[LAUGHTER]

play09:11

AUDIENCE: There should be a button that changes [INAUDIBLE]

play09:14

AUDIENCE: Oh, here we go.

play09:15

MARVIN MINSKY: What?

play09:16

Did it go on?

play09:17

AUDIENCE: [INAUDIBLE]

play09:27

MARVIN MINSKY: Oh, it's up.

play09:29

Oh well.

play09:38

It might be in this random lecture.

play09:51

How do I get rid of those?

play09:56

AUDIENCE: I think you might be able to go into View at the top

play09:59

to get rid of it.

play10:02

MARVIN MINSKY: There's a sort of bug in the tool box

play10:10

thing on the Macintosh, which is,

play10:12

if you make one of these too long,

play10:14

there's no way to get rid of it except to restart

play10:18

the machine in some other mode.

play10:21

I can't catch it.

play10:27

Maybe this works.

play10:35

Oh well.

play10:37

That diagram, there's two hierarchical diagrams.

play10:41

The theme of the emotion machine book

play10:44

is mostly the six layers of instinctive, built-in

play10:49

reactions, learned, conditioned reactions, and going up

play10:57

to reflective and self reflective and so on.

play11:01

And the other diagram starts out with just a neural net,

play11:05

and then things like K-lines, which

play11:08

are ways to organize groups of activities,

play11:12

and then frames and trans frames.

play11:16

A trans frame is a way of representing knowledge

play11:21

in terms of how an action affects

play11:25

a situation or a particular situation

play11:28

and an action produces a new one.

play11:31

And then a story is usually a chain of trans frames.
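
One way to picture the trans-frame idea in code, a minimal sketch with invented situation labels: a story checks out as a chain when each action's resulting situation is the next frame's starting situation.

    from dataclasses import dataclass

    @dataclass
    class TransFrame:
        before: str   # the situation the action starts from
        action: str   # what is done
        after: str    # the new situation the action produces

    # A story as a chain of trans-frames.
    story = [
        TransFrame("bird on ground", "flap wings", "bird in air"),
        TransFrame("bird in air", "glide to branch", "bird on branch"),
    ]

    # The chain is coherent when each frame's outcome is the next frame's start.
    print(all(a.after == b.before for a, b in zip(story, story[1:])))   # True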

play11:36

And of course, a meaningful story

play11:37

is one which I didn't have a level for,

play11:42

good stories and useless stories.

play11:47

So somewhere at a very high level,

play11:53

we all have knowledge of, if you're

play11:55

facing some sort of problem, what kind of strategy

play12:01

might be good for solving that kind of problem?

play12:05

And in that case, each layer is made

play12:09

of things in the lower layer.

play12:12

Whereas in the society of mind hierarchy,

play12:19

each layer does different things that

play12:21

operates on the result of the other layers.

play12:27

I guess if you look at any mechanism,

play12:29

you'll have a diagram of what the parts do

play12:32

and how they relate.

play12:33

And you'll have a diagram of which

play12:36

isn't in the machinery, of what are

play12:39

the functions of the different sets of parts

play12:41

and how are those functions related?

play12:44

So that might be a bug in both books,

play12:49

that I drew the diagrams to look pretty similar.

play12:58

It's a bad analogy.

play13:02

AUDIENCE: [INAUDIBLE] was it a stimulus-response model,

play13:05

where if you fed a story into it, beneath it

play13:08

were the interpretive mechanisms?

play13:10

But does it flow the other way?

play13:12

Is it generative from bottom to top as well?

play13:15

MARVIN MINSKY: Well, in some sense,

play13:16

this trans frame says, here's a piece of knowledge, which says,

play13:22

if you're in such a situation, this is a way

play13:25

to get to another situation.

play13:28

In the traditional behavioristic--

play13:33

behaviorist is a word for the class of generations

play13:40

of psychologists who tried to explain behavior just in terms

play13:45

of reacting to situations.

play13:49

And that wasn't connected to--

play13:56

what am I trying to say?

play14:00

In the standard behaviorist models,

play14:04

which occupied most of psychology

play14:07

from the 19th century up to the 1950s when modern cognitive

play14:16

psychology really started, you just looked at the animal

play14:22

as a collection of reactions.

play14:25

And then in cognitive psychology,

play14:27

you start to look at the animal as having goals and problems.

play14:32

And then some machinery is used to go from your--

play14:42

the way you describe your situation,

play14:44

to generating a plan for what you're going to do about that.

play14:48

And then the plan ends up being made

play14:51

of little actions, of course.
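
A toy contrast between the two views, using made-up stimuli, goals, and plans: the behaviorist model is a bare stimulus-to-response table, while the cognitive model interposes a goal and produces a plan made of little actions.

    # Behaviorist view: a direct stimulus-to-response table.
    reactions = {"sees food": "approach", "sees predator": "flee"}

    # Cognitive view: compare the situation with a goal and emit a plan,
    # which bottoms out in little actions.
    known_plans = {("hungry", "fed"): ["find food", "approach", "eat"]}

    def plan(situation, goal):
        if situation == goal:
            return []
        return known_plans.get((situation, goal), ["explore"])

    print(reactions["sees food"])    # approach
    print(plan("hungry", "fed"))     # ['find food', 'approach', 'eat']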

play14:53

But before 1950, there were only a few psychologists who--

play15:03

and philosophers, I should say, going all the way back

play15:06

to people like David Hume and Spinoza

play15:11

and maybe Immanuel Kant.

play15:14

They made up-- if you read their stuff and ignore

play15:19

the philosophy, you see that there was a very slow progress

play15:25

over really three centuries of trying to get from logic,

play15:32

which sort of first appears around the time of Leibniz--

play15:37

when is Leibniz?

play15:38

1650 or so?

play15:41

AUDIENCE: [INAUDIBLE].

play15:45

MARVIN MINSKY: Around, yes.

play15:46

They never met, I believe.

play15:51

So a lot of philosophy has--

play15:56

which I don't know how to describe the rest of it.

play16:00

But a lot of it is making--

play16:03

trying to make high level theories of how thinking works.

play16:07

And it's, of course, mixed with all sorts

play16:09

of problems about why the world exists and ethics

play16:13

and what are good things to do and bad and all sorts of mixed

play16:19

up things.

play16:20

And psychology doesn't appear--

play16:23

I don't think there's a name for that field

play16:26

until the 1880s or so.

play16:32

Who's the first psychologist you can think of?

play16:36

AUDIENCE: William James.

play16:37

MARVIN MINSKY: William James is around 1890.

play16:42

There's a guy named [INAUDIBLE] in Austria, I think.

play16:46

Sigmund Freud starts publishing around 1890.

play16:51

Francis Galton in England is maybe

play16:54

the first recognizable psychologist.

play16:57

He has a big book called An Inquiry Into Human Faculty

play17:03

which makes good reading right now.

play17:09

Because it has-- each chapter is about a different aspect

play17:12

of what would be called modern cognitive psychology.

play17:16

How do people recognize things?

play17:20

What kinds of memory cues do you use to retrieve stuff?

play17:25

All sorts of sort of--

play17:30

they're like term papers, the chapters.

play17:34

Some little theory.

play17:35

And you'd say, I can do better than that.

play17:37

And indeed, you could.

play17:39

But at that time, no one could.

play17:41

Yes?

play17:42

AUDIENCE: I feel like psychology is

play17:43

thinking about how people think, which I think [INAUDIBLE]..

play17:47

Aristotle does it.

play17:50

MARVIN MINSKY: Aristotle has more good ideas than,

play17:54

as far as I'm concerned, everyone else put together

play18:00

for the next 1,000 years.

play18:01

It's just very remarkable.

play18:04

And we don't know anything about that

play18:07

because there are no manuscripts.

play18:10

Anybody-- there's that wonderful play by--

play18:16

who's the Italian?

play18:25

What?

play18:26

AUDIENCE: Dante?

play18:27

MARVIN MINSKY: No, no, a recent one.

play18:29

AUDIENCE: [INAUDIBLE].

play18:35

MARVIN MINSKY: No, he's sort of contemporary--

play18:39

oh well.

play18:40

Anyway, he has a play about searching for the lost--

play18:47

there's some record that Aristotle had a book of jokes,

play18:51

or rather a book-- he has books on ethics and things like that,

play18:57

and there's a book about humor which is lost.

play19:00

And most scholars think it's not important,

play19:03

because if you look at the 10 existing books on Aristotle--

play19:10

I think there's about 10--

play19:13

allegedly by-- and there are students' notes.

play19:15

And almost every subject appears in at least two of them anyway.

play19:22

So one conjecture is that there really isn't any--

play19:25

very much lost from ancient times.

play19:33

Anyway, if you ever read books, you might as well read one

play19:43

or two of Aristotle's.

play19:44

Because it's-- the translations I'm told are pretty good,

play19:50

and you can actually get ideas from it.

play19:56

Yes?

play19:57

AUDIENCE: I don't know if you ever heard about [INAUDIBLE]..

play20:03

MARVIN MINSKY: Umberto Eco is the writer.

play20:06

[LAUGHTER]

play20:07

Sorry.

play20:08

How does memory work?

play20:10

Something-- something about your expression.

play20:13

Sorry.

play20:15

AUDIENCE: [INAUDIBLE] he tries to explain consciousness.

play20:18

But you say that consciousness is [INAUDIBLE] work.

play20:23

But I don't quite agree with his definition.

play20:27

But basically his definition is that the more

play20:31

the information is integrated, the more conscious the being is.

play20:40

MARVIN MINSKY: The more information you have?

play20:42

AUDIENCE: The more integrated the information is.

play20:45

So for example, I don't know, he used the example of a MacBook

play20:53

that has a lot of information that's not integrated.

play20:56

Like, it is not correlated, and so it's not very conscious.

play21:01

MARVIN MINSKY: That sounds like an important idea

play21:03

and there ought to be a name for it.

play21:06

AUDIENCE: Yeah, he had something.

play21:08

But I think [INAUDIBLE] And this guys is, like, a neuroscientist

play21:12

and psychologist.

play21:15

And like you see some edge cases of people

play21:19

that split their brain in half.

play21:22

And it seems that both halves are kind of conscious.

play21:27

But I [INAUDIBLE] because that people, they

play21:31

still have information that's integrated.

play21:34

But it seems that they are not conscious.

play21:36

So there must be some action into that information,

play21:40

even if it's passive or active.

play21:42

But it seems very interesting.

play21:45

MARVIN MINSKY: Well which of my 30

play21:48

features that go into that suitcase do they have?

play21:55

It doesn't make any sense to say something is conscious or not,

play21:58

does it?

play22:00

You just said it yourself, that there's

play22:07

some degree of integration perhaps.

play22:13

But can you say what you mean by integration?

play22:17

You probably need to say 20 things and many of them

play22:21

might be independent.

play22:27

Here's an example of something.

play22:31

Many years ago, people in the 1950s and '60s,

play22:41

it was very popular to talk about the left and right brain.

play22:47

Have you heard people say-- what's

play22:49

the difference between the left brain and the right brain?

play22:53

AUDIENCE: Rational--

play22:55

MARVIN MINSKY: Rational versus emotional?

play22:59

Now I haven't heard anybody discuss that

play23:02

for the last 15 or 20 years.

play23:04

AUDIENCE: Although it seems to have

play23:07

become really enmeshed in popular culture now.

play23:09

If you asked anybody what they know about the brain, what

play23:11

the person will say is, well, I'm

play23:13

more of a right-brained person or a left-brained person.

play23:15

That seems to be a sticking point.

play23:17

MARVIN MINSKY: They used to, but I haven't heard

play23:20

that for at least 15 years.

play23:23

I have not heard a single person, psychologist,

play23:26

mention it.

play23:27

Have you?

play23:28

AUDIENCE: I think fMRI has all but obsoleted that theory.

play23:34

AUDIENCE: There's one thing it's good for.

play23:36

It's disproving that.

play23:38

MARVIN MINSKY: Anyway, I mention--

play23:39

in The Society of Mind, I think, I had a grumble about it.

play23:43

Which is that, as far as I can tell,

play23:48

it appears to be true that language

play23:53

is located in most people in two very

play23:56

definite areas in the left brain but occasionally,

play24:01

in the right brain of some people.

play24:03

But other than that, as far as I can see,

play24:07

when you actually catalogue the differences

play24:09

that the psychologists reported in the 1960s and '70s, then

play24:18

the things in the left brain were largely

play24:22

adult kinds of thinking, and the things in the right brain

play24:26

were largely childish.

play24:29

Not-- it wasn't that they were rational or not,

play24:32

it was that they weren't very hierarchical and tower-like.

play24:39

And I think there was a nice romantic idea

play24:44

of contrasting emotions and intellect and all

play24:50

those dumbbell distinctions and projecting them onto the brain.

play24:57

But I don't know how I--

play24:59

what started me on that track.

play25:01

But it's interesting that it was very, very popular

play25:04

and psychologists talked about it all the time

play25:07

when I was a student.

play25:09

And I haven't seen it mentioned by any cognitive psychologist

play25:14

for--

play25:19

yeah?

play25:21

AUDIENCE: So he mentioned this theory, but we don't--

play25:26

I believe we don't test our theory with edge cases.

play25:29

So like mental [INAUDIBLE] people or people that--

play25:34

probably there are a lot of people that--

play25:37

not a lot, but some percentage of people that are mentally ill

play25:40

or don't have--

play25:42

form so well in some part of the brain.

play25:45

And maybe we can have some idea of like what consciousness

play25:52

is, just by seeing people that don't

play25:56

have some part of the brain that might interfere with something.

play26:00

I don't know.

play26:02

Like this big brain may give a reason why--

play26:08

what consciousness is.

play26:09

Because maybe some half a brain [INAUDIBLE] consciousness.

play26:15

MARVIN MINSKY: But I don't understand what you're--

play26:17

you're trying to-- you're trying to construct a meaning

play26:22

for the word "consciousness."

play26:25

AUDIENCE: Well, Tony is definitely onto something

play26:29

interesting.

play26:30

And I think the reason that he uses the word "consciousness"

play26:33

is that it's in the sense that people talk

play26:35

about losing or regaining it.

play26:38

And so he can actually experimentally test

play26:42

this theory--

play26:43

people who are asleep, or in a coma,

play26:46

or dreaming, or locked in, or is just in a vegetative state.

play26:50

[INAUDIBLE] this theory actually agrees

play26:53

with sort of a common-sense idea of whether this person is

play26:56

conscious in a temporary way.

play26:59

MARVIN MINSKY: But then is that different from--

play27:03

if you used the word "thinking" instead, you

play27:05

could say when somebody is in a coma, they're not thinking.

play27:09

AUDIENCE: I don't think that it's good for him

play27:11

to use the word "consciousness."

play27:13

I think that the word "consciousness,"

play27:14

to many people, refers to a lot of things

play27:17

that his theory does not treat at all.

play27:19

MARVIN MINSKY: See, it's really dangerous if you--

play27:24

is it Pinker who likes--

play27:26

I forget.

play27:27

AUDIENCE: Yeah.

play27:28

MARVIN MINSKY: It's dangerous to feel sure

play27:31

that there is something very important

play27:36

and a central mystery and--

play27:38

what does he call it?

play27:39

The hard problem of psychology.

play27:43

And so here is really a very smart guy, Steven Pinker.

play27:49

And as far as I can see, he does nothing

play27:51

but harm to the people he talks to, because he

play27:55

gets them to do bad experiments and waste their time.

play28:00

So instead of trying to revive consciousness,

play28:05

it's worth considering that might be a very bad thing

play28:08

to do to yourself and other people.

play28:12

What problem are you trying to solve?

play28:14

Is there any way--

play28:15

or the problem of qualia, for example.

play28:18

Because the standard view--

play28:23

and this is something that still is a serious disease even today

play28:29

in philosophy.

play28:31

That is, the idea that the redness of red things

play28:36

is a very fundamental thing.

play28:40

It's indivisible.

play28:41

It's not describable.

play28:43

It's like-- to those philosophers,

play28:47

that's just as important as when--

play28:53

who was the Greek--

play28:54

Democritus, was it?

play28:55

Who discovered atoms?

play28:58

The idea of atoms was an enormous breakthrough.

play29:02

Of course, it took 2,000 years before people

play29:08

realized that, yes, there are atoms and they're not.

play29:13

They're actually complicated systems

play29:15

made of quarks and 5 or 10 other things.

play29:20

So now we don't have atoms anymore.

play29:25

But I think Pinker has the idea that red is irreducible.

play29:31

And you can't describe it.

play29:32

It's like the atom of thought.

play29:35

And these qualia are the fundamental problem

play29:38

of psychology.

play29:40

To me, it's exactly the opposite.

play29:42

Why do we have a word for it?

play29:46

When I say red, do you experience the same thing

play29:51

as anyone else who says red?

play29:52

And it seems to me that somebody who

play29:55

got sick after eating a tomato has a different qualia for red

play30:02

and, you know, blood, violent things, bad.

play30:09

Maybe another child has all sorts of pleasant associations

play30:14

with things that are red.

play30:16

And the concept of red is--

play30:19

it's not that it's inexpressible because it's indivisible.

play30:24

It's inexpressible because it's connected with thousands

play30:27

of other ideas and experiences.

play30:31

And therefore, there's no way to make

play30:34

a compact definition of it.

play30:36

But it's exactly the opposite.

play30:38

It's not the hard problem of psychology.

play30:45

It's not a problem--

play30:46

it's something that will fall out automatically

play30:49

without any effort when you have a pretty

play30:52

good theory of psychology.

play30:55

AUDIENCE: But why do we have these qualia [INAUDIBLE] Why?

play31:00

MARVIN MINSKY: Why do we have descriptions of things?

play31:03

Because the animals that don't have

play31:05

compact descriptions of things get eaten very quickly,

play31:09

because they can't recognize things that might hurt them.

play31:14

It's very important to have machinery

play31:15

for recognizing real things.

play31:19

And real things have features.

play31:22

In fact, there is such a thing as redness--

play31:26

namely, the frequencies of light of what?

play31:30

Around 400 nanometers?

play31:34

What's the frequency?

play31:36

What?

play31:37

AUDIENCE: 700 nanometers?

play31:38

MARVIN MINSKY: That far?

play31:39

That's infrared, isn't it?

play31:41

AUDIENCE: A little bit.

play31:42

650, 680.

play31:43

MARVIN MINSKY: Anyway.

play31:46

One of the things somebody pointed out to me in later life

play31:50

is that there's only one yellow.

play31:53

There are a lot of shades of red but interesting

play31:58

how tiny the yellow spectrum is.

play32:01

I don't know what it means.

play32:07

If you look around a room there--

play32:11

I don't see a single one.

play32:15

AUDIENCE: It might be a lion.

play32:16

MARVIN MINSKY: What?

play32:17

AUDIENCE: It might be a lion.

play32:19

MARVIN MINSKY: A lion, yes.

play32:21

Does anybody see anything yellow in here?

play32:23

AUDIENCE: [INAUDIBLE] the consistency of yellow light

play32:26

and can you do it?

play32:28

[INAUDIBLE]

play32:32

MARVIN MINSKY: Yes, what element has a bright yellow line?

play32:38

AUDIENCE: Sodium.

play32:39

MARVIN MINSKY: Sodium.

play32:40

It's, yeah, orange-ish.

play32:42

AUDIENCE: It's orange.

play32:44

Yellow as the sun.

play32:46

MARVIN MINSKY: Yes.

play32:47

Maybe that's very important.

play32:58

It's in the bin.

play33:00

That's great.

play33:06

AUDIENCE: So this color is called warm white.

play33:09

[LAUGHTER]

play33:13

MARVIN MINSKY: In the story, yeah.

play33:14

AUDIENCE: It has a qualia [INAUDIBLE]..

play33:16

[LAUGHTER]

play33:18

MARVIN MINSKY: Warm white.

play33:19

AUDIENCE: Warm white.

play33:21

MARVIN MINSKY: What is it in Finland?

play33:24

AUDIENCE: I don't-- it's called--

play33:27

the light like that [INAUDIBLE] comes from the tungsten--

play33:31

the [INAUDIBLE] tungsten light bulbs [INAUDIBLE]

play33:39

MARVIN MINSKY: Yes, that's right.

play33:41

I've stocked up on 20 watt tungsten bulbs.

play33:45

Because my house is full of fluorescent bulbs

play33:48

that are remote controlled by things.

play33:52

And if there's no incandescent bulb in one of the sockets,

play33:55

then the remote controller breaks.

play33:59

These are the things you buy with, what are they called?

play34:09

Little units that--

play34:11

AUDIENCE: X10?

play34:12

MARVIN MINSKY: X10, right.

play34:14

The old X10 units, the receivers burn out

play34:19

if there's no resistive load on them.

play34:23

So I have to have enough incandescent bulbs

play34:29

for the next 20 years or get rid of the X10s.

play34:38

I think they're illegal in Japan or have--

play34:44

they're still there?

play34:45

AUDIENCE: Yeah.

play34:46

You can still find it in some shops,

play34:48

and people buy them so that [INAUDIBLE]

play34:56

MARVIN MINSKY: I bought a lot of LED light bulbs

play34:59

at the swapfest the other day.

play35:05

Back to AI.

play35:09

AUDIENCE: So the reading, you seem

play35:11

to imply that evolution is the best strategy for creating AI.

play35:15

Because, one, it'll take a lot of time.

play35:17

And two, because you'll get stuck a [INAUDIBLE]..

play35:21

But if we had infinite time and enough mutation,

play35:25

do you think it'd be possible to create

play35:27

a good artificial intelligence using evolution?

play35:30

MARVIN MINSKY: Well, if there's somebody in charge.

play35:35

If you have evolution like on a big planet,

play35:42

then you get a lot of lifeforms.

play35:45

And so the problem is that you might

play35:50

have some really stupid life form that eats the smart ones.

play35:57

But I have a more serious objection to evolution.

play36:02

You see, there have been several projects in the last--

play36:07

well, since computer science started--

play36:10

of trying to make problem solvers

play36:15

smart by imitating evolution, which

play36:20

is variation and selection.

play36:22

So I know of about five or six such projects which were

play36:28

fairly well funded and serious.

play36:33

What's most interesting maybe was

play36:36

the one of Doug Lenat, which was just him by himself.

play36:42

So if you look up Douglas Lenat's thesis,

play36:48

which was called--

play36:51

I forgot the name.

play36:52

AUDIENCE: AM?

play36:53

MARVIN MINSKY: AM, Automated Mathematician.

play36:56

And a second publication called Eurisko E-U-R-I-S-K-O.

play37:02

Those were projects in which he did variation and selection.

play37:08

And he imitated chromosomes by having

play37:11

strings of simple operations which were usually

play37:17

things like adding and subtracting

play37:18

and conditional jump and so forth.

play37:23

But there are several bugs with organic evolution.

play37:26

And the most serious one, which is that evolution

play37:33

doesn't remember what killed the losers.

play37:37

So there's no record in the genes of the mutations

play37:43

which were lethal.

play37:45

And in fact, it's almost the opposite.

play37:50

I'm told that in the human genome--

play37:55

I believe, is it still 90% doesn't do anything?

play38:00

Some large fraction?

play38:01

AUDIENCE: Someone who [INAUDIBLE] do something,

play38:04

actually.

play38:05

MARVIN MINSKY: Well, they once did presumably.

play38:07

About 90% of the human genome and a lot of other animals

play38:12

is not transcribed into proteins.

play38:15

And a fair amount of it is old inactive viruses.

play38:21

So it has, you know, maybe 90% of some really deadly virus

play38:27

that got incorporated into the genome and gets copied.

play38:31

So the big bug in evolution, to me,

play38:33

is that if you're going to build a system that's

play38:38

going to try to develop a new kind of program

play38:43

by trial and error, the standard approach is to imitate Darwin.

play38:50

And you mutate these programs, you give them a test,

play38:56

and you then copy the programs that pass the test

play39:01

and repeat the cycle.

play39:03

So what happens is you collect--

play39:05

because you're mutating them as you go along,

play39:08

you're collecting genes that help solve problems.

play39:12

But you're not collecting information

play39:14

about genes that make the animal worse

play39:17

or make it fail to solve problems.

play39:19

So this is true of all of evolution, as far as I can see,

play39:23

that there's no record kept of the worst

play39:26

things that can happen.

play39:28

And so every lethal mutation eventually kills someone.

play39:40

A lethal mutation is one--

play39:45

you know, you have two copies of every gene, one from a mother

play39:49

and one from a father.

play39:51

And if you get two copies of the same gene--

play40:00

and most genes have--

play40:04

a lot of genes are recessive in the sense

play40:07

that, unless you get two of them, they're not expressed.

play40:10

If you have a lethal recessive gene,

play40:13

that usually means that you can have one of that gene

play40:16

and you're not sick.

play40:17

But if you have two of them, it eventually kills you.

play40:21

And it might kill you before birth,

play40:24

so you don't even get an embryo.

play40:26

Or it might kill you when you're 40 years old,

play40:30

as in that horrible Huntington's disease, where you

play40:34

can carry one and not suffer.

play40:37

But if you get two, it kills you in middle age which

play40:40

is very expensive for society.

play40:43

Anyway, there's no record.

play40:47

What you want to do is, for each problem solver

play40:53

that doesn't work, you want your evolution program

play40:59

to see why it doesn't work and not make that kind of gene

play41:03

again or whatever was responsible for it.

play41:06

So that's a big bug in Darwinian evolution.
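
A minimal sketch of the mutate-test-copy cycle just described, extended with the fix Minsky is pointing at: a memory of what killed the losers. All the details here (the gene alphabet, the test, blacklisting the single mutated gene) are invented for illustration; real credit assignment, figuring out which gene caused the failure, would be much harder.

    import random

    GENES, LENGTH = range(10), 6
    target = [7, 3, 1, 4, 4, 2]           # stand-in "test": match this vector

    def fitness(g):
        return sum(a == b for a, b in zip(g, target))

    lethal = set()                        # record of (position, value) pairs that failed

    def mutate(genome):
        i = random.randrange(LENGTH)
        allowed = [v for v in GENES if (i, v) not in lethal] or list(GENES)
        child = genome[:]
        child[i] = random.choice(allowed)
        return child, i

    best = [0] * LENGTH
    for _ in range(500):
        child, i = mutate(best)
        if fitness(child) >= fitness(best):
            best = child                  # copy the survivor and repeat the cycle
        else:
            lethal.add((i, child[i]))     # remember what killed the loser
    print(best, fitness(best))

In this toy setting the memory simply stops the mutator from retrying values it has already seen fail, which tends to reach the target in fewer trials than a memoryless loop.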

play41:09

And the interesting fact is that every lethal recessive

play41:11

gene will eventually, on the average, kill someone.

play41:17

This is not well-known.

play41:19

You see the arithmetic?

play41:21

Because it has to wait till there are two of them, and then

play41:25

it kills that person.

play41:27

And if you calculate the probabilities

play41:30

that there's a half chance of getting each of them

play41:33

in each generation, the math shows that eventually there's

play41:40

one premature death for each recessive gene.

play41:45

It's kind of funny.
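
A toy Monte Carlo of that arithmetic, under simplified random-mating assumptions. In a small simulated population, drift can also remove the allele silently, so the measured toll per new mutation comes out below the idealized one-death figure, but the steady cost of carrying lethal recessives is visible.

    import random

    def deaths_per_new_mutation(pop_size=100, trials=500):
        total_deaths = 0
        for _ in range(trials):
            # each individual carries 0 or 1 copies of the lethal recessive allele
            pop = [1] + [0] * (pop_size - 1)
            while any(pop):
                next_gen, deaths = [], 0
                while len(next_gen) < pop_size:
                    a, b = random.choice(pop), random.choice(pop)
                    # a carried copy is transmitted with probability 1/2
                    child = (random.random() < a / 2) + (random.random() < b / 2)
                    if child == 2:
                        deaths += 1       # homozygote: one premature death
                    else:
                        next_gen.append(child)
                total_deaths += deaths
                pop = next_gen
        return total_deaths / trials

    print(deaths_per_new_mutation())      # average premature deaths per new mutation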

play41:46

So it would be nice if we had some way to clean them

play41:51

up once and for all.

play41:53

And then everybody would be a lot healthier.

play41:56

I bet, within the next 20 or 30 years,

play42:01

we'll see some project which is to get rid of--

play42:04

just take somebody's genome, sweep out

play42:08

all the lethal recessives, and get rid

play42:12

of 100 diseases or more.

play42:14

And suddenly, everybody will live

play42:17

to be 150 years instead of 100.

play42:21

Something like that ought to happen.

play42:23

AUDIENCE: There's a theory as to why

play42:26

recessive genes stay in the population

play42:29

despite killing off people.

play42:31

And there are some genes for which

play42:32

it seems to be the case that, you know,

play42:35

when you get two recessive genes, you die.

play42:38

But having the heterozygous population

play42:40

gives you some benefit by giving benefit

play42:43

against a different disease.

play42:44

And that's why it exists.

play42:46

So just getting rid of all the recessive lethal genes

play42:49

might cause problems.

play42:51

MARVIN MINSKY: Wow, I hadn't thought of that.

play42:53

Are there some examples?

play42:55

AUDIENCE: Oh, yeah.

play42:56

Malaria.

play42:57

AUDIENCE: Yeah, sickle cell anemia.

play42:59

Malaria, so if you have--

play43:01

you have sickle cell, you cannot get malaria.

play43:04

AUDIENCE: If you're heterozygous for the sickle cell disease,

play43:07

[INAUDIBLE]

play43:07

MARVIN MINSKY: But that's not very beneficial,

play43:09

because you usually die when you're around 40.

play43:12

AUDIENCE: No, no, no.

play43:13

If you're heterozygous for sickle cell.

play43:14

MARVIN MINSKY: Oh.

play43:15

AUDIENCE: Then you don't have sickle cell disease,

play43:17

but you have benefits against malaria.

play43:19

MARVIN MINSKY: Oh, I didn't know that.

play43:21

AUDIENCE: The best example commonly given

play43:23

in all biology classes.

play43:25

But I'm sure there must be other examples.

play43:27

MARVIN MINSKY: I never took a biology--

play43:34

that's good.

play43:38

So we could probably find one that--

play43:41

we just have to tailor it a little bit.

play43:44

Yeah, so the mosquitoes don't like it?

play43:48

Is that what it is?

play43:51

AUDIENCE: It's just bad enough blood

play43:52

that the mosquitoes will ignore you,

play43:54

but not bad enough that you die.

play43:57

MARVIN MINSKY: Does it keep the mosquito from biting you?

play43:59

Or does it make the mosquito sick or what?

play44:04

AUDIENCE: [INAUDIBLE].

play44:05

MARVIN MINSKY: It's just in-- yeah.

play44:06

AUDIENCE: Yeah.

play44:08

Some stuff I've read about viruses,

play44:10

you have people changing their theory about viruses.

play44:13

And one thing that could maybe-- in some sense,

play44:16

we're symbiotic with viruses, in some sense [INAUDIBLE]..

play44:20

But like you say, the jump comes at the genome.

play44:24

It may be process that takes advantage of that.

play44:27

So one thought is maybe the viruses

play44:28

are the things that [INAUDIBLE] the losers,

play44:31

remember why losers lost.

play44:34

MARVIN MINSKY: That's a good point.

play44:36

There are lots of things we don't know and wrongly believe.

play44:45

With this synthetic life, there are

play44:49

two groups starting to make--

play44:52

maybe more.

play44:52

There are probably some secret groups

play44:54

trying to make them, too.

play44:57

AUDIENCE: Also in some sense, the bacteria

play45:02

that live in the human body weigh

play45:04

far more than the cells that are really yours

play45:07

and so forth and so on.

play45:09

You know, they're starting to think

play45:10

that the entire genome [INAUDIBLE] bacteria colonize

play45:14

you are also part of that equation in some way.

play45:16

So, you know, it could be that some of the genetic information

play45:20

in evolution is not kept in your own genome

play45:23

but are kept in all the organisms that are--

play45:25

that live in the human [INAUDIBLE]..

play45:28

MARVIN MINSKY: Yeah, it's--

play45:31

AUDIENCE: Is there [INAUDIBLE]?

play45:32

AUDIENCE: Yes, there is.

play45:33

That somebody is trying to sequence the--

play45:35

MARVIN MINSKY: Bacteria [INAUDIBLE]??

play45:37

AUDIENCE: Yeah, [INAUDIBLE].

play45:40

Do you know what that's called?

play45:42

MARVIN MINSKY: How many do you think--

play45:43

AUDIENCE: He's trying to sequence

play45:45

every genome of everything that lives in your gut.

play45:48

MARVIN MINSKY: Yeah, how many--

play45:50

I understand there are more bacterial cells

play45:52

than somatic cells by a factor of 100 or something.

play45:57

Because bacteria are so small.

play45:59

But how many different bacteria infest a person?

play46:03

Is it hundreds or tens or thousands?

play46:06

AUDIENCE: I guess that's what we're trying to find out.

play46:14

MARVIN MINSKY: Yeah.

play46:16

AUDIENCE: So when you say like, in evolution, it

play46:19

would be nice if we had everything that went bad--

play46:24

and then you said-- and then we could

play46:27

see what went wrong, right?

play46:29

But isn't it that what we're doing evolution [INAUDIBLE]

play46:35

we don't have a clear idea of how someone doesn't

play46:39

have to solve the problem?

play46:40

So even though we have the information of the solver,

play46:46

that they don't work.

play46:47

Like we-- I feel like if we had a way to know what went wrong,

play46:54

then we would already have information enough

play46:57

to know what is right, you know?

play47:00

MARVIN MINSKY: Oh, yes.

play47:01

AUDIENCE: So how do you decide [INAUDIBLE]??

play47:04

MARVIN MINSKY: I was thinking of a fairly high level system.

play47:07

Because when Lenat or Larry Fogel--

play47:12

was another one of these learning by evolution systems.

play47:22

I'm not suggesting that we could make a simple evolution

play47:29

simulation that would think of reasons why it failed.

play47:34

So this would be a high level one,

play47:35

if you're writing a big AI program.

play47:38

For example, when you learn arithmetic, after awhile,

play47:44

you learn not to divide by 0.

play47:48

So what do we call negative knowledge?

play47:56

What are the commonsense things?

play47:58

Is there a name for the things you should never do?

play48:03

AUDIENCE: Well, when people talk about--

play48:05

you know, they-- search tree as a possibility,

play48:07

you prune the trees.

play48:09

MARVIN MINSKY: You prune the tree.

play48:10

But, you know, we have rule based systems.

play48:13

And they got very popular around 1980

play48:16

and wiped out most of symbolic AI for a long time.

play48:24

But there aren't any rules that say, don't do x.

play48:29

Are they ever?

play48:30

Do they have some?

play48:32

AUDIENCE: Some experts [INAUDIBLE]

play48:35

MARVIN MINSKY: So the question is, when are they invoked?

play48:37

In a certain situation, turn off this bank of rules, maybe.

play48:47

So I'm not suggesting that you can make

play48:49

a very simple system do that.

play48:51

Because, in fact, figuring out why this mutation was bad

play48:57

might be a very hard problem.

play48:59

But as you build smarter and smarter ones,

play49:02

then you want to put--

play49:03

well, what I called critics.

play49:06

Or I don't know.

play49:07

Freud had a name for them.

play49:11

At some point, you want to have prohibited actions

play49:15

and in Sigmund Freud's early model of psychology,

play49:22

there was a place for things that you

play49:25

would go away from or not do and these censors, he called them.

play49:33

And they never appeared in the main line of psychology.

play49:42

When they threw out Freud, who had a few bad ideas,

play49:46

they threw out all these good ideas, practically.
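
A minimal sketch of the critic idea in rule-based terms, with invented rules: ordinary rules propose actions, and a separate bank of critics, the negative knowledge, vetoes actions known to be harmful, such as dividing by zero.

    # Ordinary rules propose an action for the current state.
    def propose(state):
        if state["op"] == "div":
            return "divide x by y"
        return "do nothing"

    # Critics: negative knowledge that suppresses known-bad actions.
    critics = [
        ("never divide by zero",
         lambda state, action: action == "divide x by y" and state["y"] == 0),
    ]

    def act(state):
        action = propose(state)
        for name, objects in critics:
            if objects(state, action):
                return "suppressed: " + action + " (" + name + ")"
        return "doing: " + action

    print(act({"op": "div", "x": 6, "y": 2}))  # doing: divide x by y
    print(act({"op": "div", "x": 6, "y": 0}))  # suppressed: divide x by y (never divide by zero)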

play49:50

AUDIENCE: You might be pleased to hear that some of the monkey

play49:52

neuroscientists are starting to find some [? critics. ?]

play49:57

It's pretty handwave-y stuff as of now.

play50:00

But at least they're thinking about it.

play50:02

There's certain tasks where the monkey is cued to pay attention

play50:07

to one thing or another.

play50:09

Usually, if any, it's color versus orientation.

play50:12

And what they found is that orientation has dominance.

play50:15

And so when a cue is telling the monkey

play50:17

that they have to ignore the orientation

play50:20

and pay attention to color, the part--

play50:24

those neurons which are responsible for looking

play50:26

at the orientation are being actively inhibited

play50:30

by another group of neurons, which they're now

play50:33

calling a [? critic. ?]

play50:34

MARVIN MINSKY: Are these in the same--

play50:37

or is it a little nearby nucleus that's--

play50:40

AUDIENCE: Nearly nucleus.

play50:41

MARVIN MINSKY: That's nice.

play50:42

So that would be a good place for--

play50:49

is there a word for negative knowledge?

play50:53

AUDIENCE: They call it negative knowledge, I guess.

play50:56

MARVIN MINSKY: It would have too many different senses.

play51:01

Advice not to take.

play51:02

There's some--

play51:08

AUDIENCE: So this question would imply that there

play51:10

is a metric for intelligence.

play51:12

But is there a limit to intelligence?

play51:16

As in, is it possible to say one day

play51:18

we have artificial intelligence that is the most

play51:21

intelligent possible thing?

play51:24

MARVIN MINSKY: Seems unlikely, because presumably the survival

play51:33

value of a particular system depends

play51:36

on the world the thing is in.

play51:39

It might be that for all really--

play51:44

for all worlds above a certain complexity,

play51:48

maybe there are some overall strategies

play51:52

that are universally better than others or something.

play51:56

But measuring intelligence doesn't make any sense.

play52:01

Because you'd-- I think you have to go the way Howard Gardner

play52:07

did and say, well, there's social intelligence and--

play52:14

I don't know.

play52:15

Can anybody rattle off his list?

play52:20

What are his eight ways of thinking?

play52:30

Just look up Howard Gardner.

play52:36

So the amount of intelligence is--

play52:39

clearly, it's a useful, intuitive idea

play52:41

that for any particular machine you

play52:45

could imagine another one that can do everything

play52:49

that one can do and more.

play52:51

But you're going to get a lattice, not an ordered thing.

play52:56

And the lattice won't--

play52:58

at some point, it will start getting inconsistent.

play53:01

And this will be better than that one for this and not that.

play53:05

And--
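
A small sketch of why comparing machines gives a lattice rather than a single ordering, using invented task sets: one machine dominates another only if it can do everything the other can, and most pairs are simply incomparable.

    machines = {
        "A": {"chess", "algebra"},
        "B": {"chess"},
        "C": {"chess", "vision"},
    }

    def dominates(x, y):
        # x is at least as capable as y only if it does everything y does
        return machines[y] <= machines[x]

    print(dominates("A", "B"))                        # True: A does all B does
    print(dominates("A", "C"), dominates("C", "A"))   # False False: incomparable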

play53:10

AUDIENCE: Gardner had about nine different types

play53:13

of intelligences, according to his Wikipedia article, logical,

play53:18

mathematical, spatial, linguistic, bodily,

play53:21

kinesthetic, musical, interpersonal, intrapersonal,

play53:24

naturalistic, and existential.

play53:26

MARVIN MINSKY: There you go.

play53:28

And if you take any one of those--

play53:32

when I was a mathematician, I was really good

play53:35

at topology but not at algebra.

play53:38

And at some point, that stopped me

play53:42

from being even better at topology.

play53:46

So if you take any one of those--

play53:50

I think Howard wants to keep it simple,

play53:53

but I wonder if he has a sub psychologist who has chopped up

play54:00

mathematics into the right--

play54:02

what are the right eight?

play54:04

[INAUDIBLE]?

play54:08

How many of you are bad at some kind

play54:10

of mathematics and know why?

play54:18

AUDIENCE: I'm really bad at Fourier series,

play54:20

just because I don't like them.

play54:22

[LAUGHTER]

play54:25

MARVIN MINSKY: I wonder what Newton

play54:27

would have thought about them.

play54:37

In my PhD thesis, I had a--

play54:43

it was mostly about neural networks.

play54:45

And there were some people who thought

play54:47

that you could put information-- if you had a bunch of neurons

play54:50

in a circle, then you could put in a string of signals

play54:57

of different durations and store the bits

play55:03

in this circular thing.

play55:06

Because in World War II, there were no digital computer

play55:11

memories.

play55:13

But there were some computer-like things

play55:15

that stored signals in a tube of mercury

play55:19

with a speaker and a microphone, and it

play55:24

was possible to store a lot of information

play55:27

in sort of analog bits for a long time.

play55:32

But what you do is you have something

play55:33

that would regenerate them and synchronize them with the clock

play55:37

each time around.

play55:39

And I was trying to prove a theorem

play55:42

that, given what we know about the delay in neurons,

play55:48

if you stimulate a neuron very strongly,

play55:51

it reacts more quickly than if you just stimulate

play55:55

a little bit above threshold.

play55:57

Then it takes a longer time to fire.

play55:59

So I was trying to prove that in neural networks,

play56:02

in something like a human brain, you

play56:06

couldn't store a lot of information in circular loops.

play56:10

And I kept having trouble proving that.

play56:14

And I ran into John Nash who was another student

play56:20

a bit ahead of me.

play56:22

And he listened to me for a minute

play56:25

and he said, expanded in Fourier series.

play56:30

And after about two days, I figured out

play56:33

what he probably meant.

play56:35

And I proved this nice theorem, and it turned out it also--

play56:46

and it had been discovered a long time

play56:48

ago it was called a Lipschitz condition.

play56:51

And if you have a certain condition like this,

play56:55

then the information will go away.

play56:58

But if you don't, you can keep the information around

play57:03

for a very long time.

play57:07

So in this case, the proof showed

play57:11

that you couldn't store--

play57:13

unless you had a renormalizer or a clock somewhere,

play57:17

you couldn't store circular information

play57:20

in a mammalian brain very well.
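
A numerical sketch of what the theorem says, with made-up constants: when the loop's round-trip map is a contraction (Lipschitz constant below 1), two different stored values drift together and the information fades; a clocked regenerator that requantizes the value each cycle keeps them distinct.

    def cycle(x, L=0.9):
        # one round trip through the loop: a contraction (Lipschitz constant L < 1)
        return 0.5 + L * (x - 0.5)

    def requantize(x, levels=8):
        # a clocked regenerator that snaps the value back onto a discrete grid
        return round(x * levels) / levels

    x, y = 0.25, 0.375                    # two different stored timing values
    for _ in range(50):
        x, y = cycle(x), cycle(y)
    print(abs(x - y))                     # ~6e-4: the difference has decayed away

    x, y = 0.25, 0.375
    for _ in range(50):
        x, y = requantize(cycle(x)), requantize(cycle(y))
    print(abs(x - y))                     # 0.125: the regenerated values stay apart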

play57:26

It's a nice example of something where

play57:30

one person had a different way of looking at it.

play57:33

Nash was pretty famous for his results in game theory,

play57:40

but I suspect he might have been responsible

play57:42

for 5 or 10 other things that he--

play57:50

Norbert Wiener had this habit of talking to a student.

play57:54

He says, what are you working on?

play57:56

And the student would explain it.

play57:59

And Wiener said, oh, well you just do this.

play58:03

And I was present at a meeting of the--

play58:06

I was in the math department where

play58:10

they had a meeting about who would tell Wiener not

play58:12

to do that anymore.

play58:13

[LAUGHTER]

play58:22

Some student had-- oh well, it's a true story.

play58:35

I wonder what else I've forgotten.

play58:40

Yes?

play58:41

AUDIENCE: I'm curious.

play58:42

You say this could be updated with a clock.

play58:44

Is there any evidence to suggest that biologically one could

play58:48

or could not construct a clock?

play58:51

MARVIN MINSKY: There are lots of clocks.

play58:54

I suspect that if I had thought about it more I would have--

play59:00

because I'm talking the middle 1950s, and people

play59:05

knew a lot about brainwaves.

play59:07

And, you know, there are three or four

play59:10

fairly large synchronous activities in the brain.

play59:17

And I don't think anybody knows much about what they're for.

play59:20

Do you know?

play59:20

Have you heard any rumors?

play59:22

What is the delta wave for?

play59:24

AUDIENCE: Well, actually, the monkey experiment

play59:27

I was just talking about relies on the assumption

play59:31

that the beta wave is for suppression

play59:36

and the alpha wave is for activation.

play59:39

And I think people are still sort

play59:41

of debating about the delta and theta waves.

play59:43

MARVIN MINSKY: Mhm.

play59:45

The alpha wave-- what's the 10% in--

play59:48

I think that's the big one.

play59:50

And it goes away when you are thinking hard.

play59:55

That is, if you're not focusing much on anything,

play60:00

then it's a fairly nice regular 10 per second.

play60:04

And if anything gets your attention and you focus on it,

play60:09

then the alpha wave pretty much gets noisy and disappears.

play60:14

I think.

play60:15

I don't know what the other what the others do.

play60:21

Is that correlated with any event?

AUDIENCE: Obviously, the usual room shutting down.

MARVIN MINSKY: I brought all this, but I decided not to use it anyway.

AUDIENCE: I think it's correlated with a certain period of time after the signal from the computer stops changing.

MARVIN MINSKY: Oh. You mean it might wake up again?

AUDIENCE: No. It shuts down at the same time every class.

AUDIENCE: It's not always the same time.

MARVIN MINSKY: It's usually at 8:30.

AUDIENCE: And he stopped using the [INAUDIBLE]. Correlation implies causation.

MARVIN MINSKY: I wonder if Steve Jobs had-- this little thing has two batteries. And at one end, there's a dot. And at the other end, there's a slot which is for a screwdriver. But it's also the minus sign of the battery. It could have been plus, but-- but-- what's that?

AUDIENCE: It's probably sized so you can put a coin in.

MARVIN MINSKY: Any coin, actually.

AUDIENCE: Yeah, so you don't actually need a screwdriver.

MARVIN MINSKY: I don't have a coin.

[LAUGHTER]

AUDIENCE: But you do have a screwdriver, right?

MARVIN MINSKY: Of course. There's usually one. It's somewhere. No tips.

[LAUGHTER]

Good question. Yeah?

AUDIENCE: Do you think artificial intelligence will ever be elected as a leader of a government?

MARVIN MINSKY: In most science fiction stories, it doesn't give us a choice.

[LAUGHTER]

The Moon Is a Harsh Mistress. That was Robert Heinlein, wasn't it? It had a really smart computer emerge from the internet on the moon. Yeah?

AUDIENCE: Yes. I was curious whether you had ideas as to how we might attempt to determine the representations of information that either people or animals use to solve problems. Clearly this is a critical problem for intelligence. And lots of AI work has gone into various ways of representing information. But it would be really interesting to see how that could be measured-- whether anyone has ideas of how that could be tested.

MARVIN MINSKY: That's wonderful. What are the cognitive psychologists doing about representations? Have you run across any?

AUDIENCE: They studied reaction times, [INAUDIBLE] way [INAUDIBLE]. They don't have very good ways of setting [INAUDIBLE]

MARVIN MINSKY: Yeah. Rule-based systems are still the-- I haven't read a modern cognitive psychology-- has anybody read a modern cognitive psychology book? Do they have trans-frames or scripts? What's happening in that realm? Try to remember what-- I guess I've never seen any Winston-like diagrams in anything but AI. But there must be some somewhere. It's 1970. Who has taken a psychology course? Is that true? What's in it?

AUDIENCE: They talk about babies a lot nowadays.

[LAUGHTER]

MARVIN MINSKY: Well, there's a little industry of trying to show that Piaget was wrong. Is that what they say about babies? When do babies get conservation of quantity or something?

AUDIENCE: Yeah, basically they just go through the whole developmental stages and explain that. But I have not seen Winston and [INAUDIBLE] predicts.

MARVIN MINSKY: Well, there is a problem with the low resolution of brain scanning. So if you can only tell when a square centimeter of brain is more active than another part, then it's hard to imagine how you could look for the representation of an arch as a block on top of two others. But you should be able to make a hypothesis about representation and then design an experiment in which you show a picture of an arch and then quickly show a picture where there's a little space between them, so it's not being supported by-- and blink those on and off and see if different kinds of changes in the representation cause different kinds of brain activity. But I suspect that most experiments on watching brain activity come from giving a single stimulus and not a pair of quickly changing ones, maybe. So you want to find what parts of the brain are activated when a certain kind of difference appears. And it shouldn't be hard to make such experiments, but my impression is that they don't do that so much as, you show a certain face for a couple of seconds, and then you show something else, and you look to see if the activity moves somewhere. But if your resolution is low, maybe you should be putting in stimuli that change, so that you're finding the response to the changes. It's just a--
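A toy sketch of that paradigm, with simulated signals only-- the region indices, effect sizes, and the idea of correlating activity against change events are illustrative assumptions, not a description of any real scanner pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_timesteps, n_regions = 400, 16

# Stimulus alternates between condition A (arch) and B (arch with a gap)
# every 20 timesteps; "change" marks the moments it flips.
stimulus = (np.arange(n_timesteps) // 20) % 2
change = np.r_[0, np.abs(np.diff(stimulus))]

# Simulated regional activity: region 3 responds to the change itself
# (the hypothesized representation being rewritten); region 7 tracks
# the raw stimulus; everything else is noise.
activity = rng.normal(0.0, 1.0, (n_timesteps, n_regions))
activity[:, 3] += 3.0 * change
activity[:, 7] += 1.5 * stimulus

# Correlate each region's time course with the change signal.
z_change = (change - change.mean()) / change.std()
z_act = (activity - activity.mean(axis=0)) / activity.std(axis=0)
corr = z_act.T @ z_change / n_timesteps

print("region most driven by the changes:", int(corr.argmax()))  # expect 3
```

The point of the design is the same as in the discussion above: with coarse spatial resolution, a response locked to the difference between two nearly identical stimuli is easier to find than the representation itself.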

AUDIENCE: One of the problems is that there is a delay with the kind of brainwaves you can get. Like, you can get more real-time reactions than with, like, fMRI.

MARVIN MINSKY: Yeah, it usually takes several seconds to get anything. You have to do--

AUDIENCE: [INAUDIBLE]

MARVIN MINSKY: You'd have to repeat it many times, and I think it still takes several seconds to get any information, doesn't it? What's the-- the first brainwave experiments were in the late-- in the 1940s. And there was that Englishman Grey Walter, who also made the first robot turtle and things like that. I was just reading some of the-- some papers he wrote in the middle 1950s. They're not very illuminating about AI, but they show you what some people were thinking in the days before computer science. Yeah?

AUDIENCE: In your book you talk about [INAUDIBLE] and big machines that accumulate huge libraries of statistical data-- you say that they cannot develop much cleverness because they don't have higher reflective levels. What are these higher reflective levels?

MARVIN MINSKY: Well, that's thinking about what you were thinking a minute ago. You know, you think something and then you say, that was a bad idea, why did I get that? Or, now I realize I didn't understand something. I've wasted five minutes because-- reflective thinking is just thinking about your recent thoughts. Maybe all thinking is-- in any coherent train of thinking, each thought is something about the previous thought, but it doesn't have the word "I" in it, you know? You say, why did I waste so much time? Why did I focus on this rather than that? What did that person say? Maybe I missed the point. Maybe most of your thinking is: what did I just think? Maybe I missed the point.
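A minimal sketch of that idea in code-- a deliberative process plus critics that watch the recent trace of thoughts. The class, the critic, and its rule are made up for illustration; this is not a rendering of any particular architecture from the course:

```python
from dataclasses import dataclass, field

@dataclass
class ReflectiveAgent:
    trace: list = field(default_factory=list)    # recent thoughts, in order
    critics: list = field(default_factory=list)  # functions: trace -> complaint or None

    def think(self, thought: str) -> None:
        self.trace.append(thought)
        # Reflective pass: each critic looks at the recent thoughts,
        # not at the world, and may add a thought about them.
        for critic in self.critics:
            complaint = critic(self.trace)
            if complaint:
                self.trace.append(f"reflection: {complaint}")

def repetition_critic(trace):
    # "What did I just think? Maybe I missed the point."
    if len(trace) >= 2 and trace[-1] == trace[-2]:
        return "I'm going in circles; why did I get that idea again?"
    return None

agent = ReflectiveAgent(critics=[repetition_critic])
agent.think("search the left branch")
agent.think("search the left branch")
print(agent.trace[-1])  # reflection: I'm going in circles; ...
```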

AUDIENCE: So here we often talk a lot about cognitive science and psychology. And I'm curious, how important do you think [INAUDIBLE] science and psychology are to the field of AI, and whether the right way of trying to build intelligent machines and understand intelligence is through understanding what we've already seen, or playing around with computers and trying to make systems that solve the problems we want to solve.

MARVIN MINSKY: I'm glad you asked that, because I don't think it's very important. Because I think we all-- we've got to the point where we know that people solve problems, and we all know how to think about how we solve some problems. We don't know the details of how we did it, but I think-- you know, if you look at what's been done in AI, it's more than clear enough where the present systems stopped and where they fail. And we keep thinking of ways to fix them, and we get sidetracked. Because that's-- you get some idea and it's too hard to program, and somebody says, use C++, and somebody else says, why did you go back to Lisp, and--

And I guess my answer is, I don't think we need, desperately, to know more about psychology. Because we already have programs that are pretty good at things, and we can see where they get stuck. But it would be nice if there were a community out there helping us. Because the AI groups are all alone, and they don't communicate very well with each other, and they're not very well supported. But I bet as long as we make machines smarter, the psychologists will pay more attention, and they'll come back and tell us better things. And eventually, there'll be a real cognitive science. Sort of like physics. Physics did very well with Newton and Galileo and quantum mechanics. But now they have a great community. And when some serious problem comes up, somebody-- they spend a billion dollars for a new accelerator or something. There's nothing like that in AI. If you say, why did the Newell-Simon General Problem Solver get stuck on the missionaries and cannibals? Somebody would say, well, here's a billion dollars. I know it's not enough, but maybe you can make it a little smarter. Nobody's offering this.

AUDIENCE: [INAUDIBLE]. Somewhat related question. So first, since AI is mostly an engineering discipline, it's a question of, how can we make machines to solve these problems with intelligence? Do you think this is going to lead to a better understanding of intelligence? And how important do you think that is to this, I guess, mostly scientific but also slightly philosophical question?

MARVIN MINSKY: I think it's just an engineering question. There just isn't a way to get enough bright people to compete with each other to make better AI systems. It's-- anybody have a theory? You see, I'm speaking from the point of view of feeling that there hasn't been much progress in recent years. And maybe I'm wrong and there's a lot of great stuff just ready to be exploited. But I don't see it.

AUDIENCE: I think we're kind of in a spinning [INAUDIBLE] of sorts, where people are doing a lot of the work in terms of, for instance, tuning the parameters and choosing the machinery and approximations in order to solve problems that there are incentives out there to solve. And in principle, if we had AI that was good, AI would do that work instead of programmers having to tune parameters and figure out which algorithms are good for different problems. But as of now, the way the incentives are structured, it's going to take a big energy push to sort of get over the hump of actually creating the infrastructure that's necessary for that stuff to happen automatically.

MARVIN MINSKY: Yeah, there are AI groups. There are a few people at Georgia Tech and Carnegie Mellon. Although, my impression is that they're mostly playing robot soccer or something. So a lot of the people who are empowered to do the right thing are-- or you look at Stanford. It's wonderful to make these self-driving cars. But I don't think a single thing has been learned from that. Maybe a little has been learned from the Watson thing, but--

AUDIENCE: They won't give out their source code.

MARVIN MINSKY: Right. And if they did, I think they could read The Society of Mind, which says: have a lot of different methods and find some way to integrate them. What's missing in The Society of Mind is better ideas on how to integrate them. And Watson might have some. But on the other hand, it might not. Maybe if it can end up with an answer that's one word, like a person or a sport, then it's done. And so it may be that we know what's at the lower levels. And we don't know what's at the higher levels, and maybe it's no good. On the other hand, maybe there are 10 very important ideas in there, and you'd have to read that long paper and try to guess what they were. Do we have a spy in there? Are they telling us something?

AUDIENCE: I get little bits and pieces back at the end. I think it is kind of-- you know, the good news about that is it has made some progress, and it is kind of a society of models. And they have some supervisory processes to try to figure out which-- actually, the most important thing is to try to figure out which methods are good for which kind of questions.

MARVIN MINSKY: That would be good. So they might have some good critics and selector-like things.

AUDIENCE: Yeah. So there's some of that, I think, in there. I don't think there are a lot of very brand new techniques, but I think there's probably some of that, yeah.

MARVIN MINSKY: They fired their other AI group, but I don't think it was getting very far either. You know the one I mean, the Eric Mueller and-- no, he moved.

AUDIENCE: He worked on Watson.

MARVIN MINSKY: No, I mean Doug Riecken, Riecken's group. It was doing more mathematical AI than, I think, heuristic AI. Any other company doing anything? What are the common sense groups in Korea and places like that?

AUDIENCE: Well, I'll point it out in December when I go there.

MARVIN MINSKY: Henry's going to visit some of them? The mysterious East. Yes?

AUDIENCE: So for a long time, going back to [INAUDIBLE], there have been machines that are trying to build a reflective [INAUDIBLE]. There are critics. And even though the idea died out in the '80s, there are still some machines-- maybe Watson-- that have critics. But the reflective layer, I feel like it does a lot of different things. So what do you think is missing from that layer that no project has [INAUDIBLE]?

MARVIN MINSKY: I'm not sure what you're asking. But there is Pat Winston's group working on stories, and my impression is that that's making definite progress. And if he can integrate with Henry Lieberman's kind of large, commonsense knowledge base, maybe something great will happen. But progress is a little bit slow. Gerry Sussman is still full of ideas, but he keeps teaching courses in physics.

[LAUGHTER]

And he's out there fixing telescopes, and he's absolutely a prodigy. And now he's working on this theory of propagators, which he claims is relevant to AI, and I don't understand it yet. But--

AUDIENCE: It's good.

MARVIN MINSKY: What?

AUDIENCE: [INAUDIBLE].

MARVIN MINSKY: I'd like to see it solve some interesting problem. But-- so we have a lot of resources here. But if you look at the world as a whole--

AUDIENCE: Yeah, for example, you talked about how we should combine [INAUDIBLE] group with the [INAUDIBLE] knowledge base. So I feel like to do that, we need some newly invented machinery.

MARVIN MINSKY: Yeah, to what extent is--

AUDIENCE: Well, if you would like to work on it, please come see me afterwards.

MARVIN MINSKY: It's a very lively group.

MARVIN MINSKY: What's happening to Lenat's group? Is he just hiding, or is he--

AUDIENCE: No, I think Doug Lenat is on a side project. And it's been steadily growing. And I think one thing-- so what was-- he had a very interesting article recently about using it for common sense for medical queries. So the Watson guys said that, you know, they want to apply Watson to medicine. But I think Lenat had a really good article about applying it to medical queries. It was things like-- you know, the doctors would ask things like, which operations, for some disease or something, have complications? And the system would have to understand, what's a complication, right? And a complication is when things don't go right. So having a drug reaction would be a complication. Leaving a scalpel in a patient could be a complication. So you have to understand some of the ideas of-- you know, common sense ideas of what might be a complication and what might cause trouble, and those kinds of things. And I thought that was a very nice system at the Cleveland Clinic. And the doctors loved it, and they wrote about it in [INAUDIBLE] magazine. I thought that was a real success.

MARVIN MINSKY: Oh, I haven't seen that. Dr. Lenat.

AUDIENCE: I mean, the problem is that the reason you haven't heard of a lot of applications for so long is because they were funded, you know, for decades by three-letter agencies in the government. And they did-- I think they did actually quite good work for them. Because otherwise, the program wouldn't have continued for 25 years. But the problem is, you know, when they do something good for the secret agencies, nobody else finds out about it [INAUDIBLE].

MARVIN MINSKY: I have a great story about that, which is almost unbelievable. Which is, I was at a meeting with John Glenn at-- this was a long time ago, when it was just starting. And this was in a building a block from the White House, and it had all these people from some agency, about whether AI could help them with their problems. And somebody pulled out some slides and was about to give a lecture. But the shelf that had the projector on it had hinges, and all the screws were missing on one side, and it fell down like this. And they fussed for a long time and couldn't get the projector to line up. And then I had this thing. And I took three screws out of-- it had three hinges, and I took three screws out of here and put them in here and here and here. And then the shelf stayed up and the show went on. You know, it's like the joke about the-- anyway. So they were astounded, because I actually fixed this stupid thing. And I said, well, why didn't you? And they said, we asked maintenance three weeks ago and they never got around to it. And I said, this is the agency? And they said yes. And then they said, but why did you have that thing with you?

[LAUGHTER]

AUDIENCE: That's OK. You'd never get past the metal detector.

MARVIN MINSKY: When I was a kid, I heard some story-- oh, never mind. About when a car wheel rolls off, you take one screw from each of the other three. So I was doing exactly that, and these agency people had never thought of doing it themselves. So what does it mean when you have a government run by people who can't fix this hinge? I once met a freshman who didn't know which way to turn a screw. At MIT. How many of you have to try both--

[INTERPOSING VOICES]

AUDIENCE: [INAUDIBLE] screw.

MARVIN MINSKY: The left hand. Some rule, right.

AUDIENCE: You're not doing the [INAUDIBLE] anymore.

AUDIENCE: Well, if you're screwing into weird angles, like [INAUDIBLE] pieces and stuff.

MARVIN MINSKY: Yeah, sometimes.

AUDIENCE: [INAUDIBLE].

MARVIN MINSKY: That's right. Enough stories.

AUDIENCE: Are you sure?

[LAUGHTER]

MARVIN MINSKY: So has he-- oh, can you send us a pointer to that paper? Lenat's?

AUDIENCE: Oh yeah, sure.

MARVIN MINSKY: That would be nice. He's one of the great pioneers of AI.

AUDIENCE: I guess I have a question about, like, extracting a piece of knowledge from experience. I feel like this is something that I think we do reflectively [INAUDIBLE] all layers. But maybe-- it's probably also this-- probably the reflective layer. So how do you think it does that?

MARVIN MINSKY: How do you retrieve your knowledge?

AUDIENCE: How do you turn an experience into a piece of-- a rule?

MARVIN MINSKY: How do you learn from an experience?

AUDIENCE: Yeah.

MARVIN MINSKY: You do something, and then you get some knowledge, and where do you put it?

AUDIENCE: How do [INAUDIBLE]? How general [? do you need ?] it? Or do you just try it? Do you have to group a lot of experiences and then results [INAUDIBLE]

MARVIN MINSKY: If we could answer that, we could all quit and go home. You're asking the central problem of, how do you learn from experience and later retrieve how you learned? I just can't think of any way to answer that except write a whole book and then have everybody find out what's wrong with it. Yeah, science is making the best mistakes. If you make a really good mistake, then somebody will fix it and you'll get progress. If you make a silly mistake, then nothing is gained.

AUDIENCE: I think we should have the surgeon [INAUDIBLE] make better mistakes.

MARVIN MINSKY: Well, how do you decide what to try? Yeah.

That's my complaint about the probabilistic methods. Because if there are a lot of-- well, I talked about it the other day. If there are a lot of different aspects of the situation, like 100, then there's 2 to the 100th conditional probabilities to think about. And so probabilistic learning machines work wonderfully well on small problems where the search trees aren't too big. But they don't-- but the hard problem is what to do when there are a lot of different factors and you don't know which are important. And in lots of situations, with just first-order correlations, there are 100 factors, and you just look at the probabilities of each of them. And then there's 10,000-- about 5,000 pairs of things. And you look at the 5,000 joint conditional probabilities of two things, and maybe five of them pop up, and you've only got five things to look at. And that's where that kind of AI system works. And it's become immensely popular. And the trouble is, it'll never get smarter. Because if you have to look five steps ahead, then instead of 10 possibilities, you have 100,000.
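The arithmetic behind those counts, as a quick sketch (the counts are exact; the five-steps-ahead figure assumes an illustrative branching factor of 10):

```python
from math import comb

factors = 100
# A full joint distribution over 100 binary factors:
print(f"2**{factors} = {2**factors:.3e} conditional probabilities")
# First-order: one probability per factor.
print(f"first-order: {factors} probabilities")
# Second-order: all pairs of factors -- the "10,000 -- 5,000" figure.
print(f"pairs: C({factors}, 2) = {comb(factors, 2)}")      # 4950, about 5,000
# Lookahead with branching factor 10:
print(f"1 step: {10**1} possibilities, 5 steps: {10**5}")  # 10 vs 100,000
```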

And anyway, my concern is that there are quite a lot of millions of dollars going into AI research. But most of it is going into dead ends. So it's not as though there were-- maybe there is enough money, but it's going to the easy problems instead of the hard ones. Who has an easy question?

AUDIENCE: So lots of people like to play games and procrastinate. Do you think artificial intelligence will also play games and procrastinate?

MARVIN MINSKY: Well, there's the opposite question. I got a message from somebody-- I don't remember who. I had complained that nobody's been able to get Push Singh's AI program to work. And somebody suggested-- what? Yeah. And somebody suggested that-- I forget the name. There's some group of people who like problems. And I can't remember what-- it's just a bunch of people out on the web who like to solve programming problems. And this person suggested sending the code to that group of a couple of thousand people, and maybe they would self-organize to try to figure out how it works and fix it. So do you think that would-- could that work?

AUDIENCE: Yeah.

MARVIN MINSKY: We have a big bunch of code. It's partly commented. Could we get 1,000 really aimless hackers out there with lots of ability to-- so maybe I'll try it. Sort of--

AUDIENCE: [INAUDIBLE] is that they might have their own code or their own sections.

MARVIN MINSKY: Well, if they're self-organizing enough. I mean, if an individual tries to fix it, that's fine. But maybe these people know how to work together. So they could chop it up and talk to each other and agree. It doesn't have to be the same as Push's. It just has-- I don't know if you've seen the movie. I'll bring it next time. You had a robot coming to try to screw the legs onto a table, but the robot has only one hand. So there's another robot over there. The first one says, help. And the other one figures out just enough to come over and pick up the other end of the table. So as far as I know, this thesis only worked out one example.

AUDIENCE: Yeah. Actually it was-- the tricky part in that was that when the other robot said [INAUDIBLE] the other robot looks away. Well, you know this [INAUDIBLE] So the other one [INAUDIBLE] the other robot [INAUDIBLE] The first robot is trying to take the table apart.

MARVIN MINSKY: Yes.

AUDIENCE: So then you have the second robot doing that. And then you have to correct it: no. Then you go back and show it [INAUDIBLE] fix it.

MARVIN MINSKY: Right. And the first robot just says no. Which means the other robot has to be very stupid to be able to interpret that as exactly one thing not to do. If it were smarter, it probably wouldn't work. But anyway, I'll bring the movie in. Have you looked at the code?

AUDIENCE: [INAUDIBLE] debug it myself.

MARVIN MINSKY: Yeah. It looks pretty horrid.

[INTERPOSING VOICES]

MARVIN MINSKY: My favorite story, which I think is true, is that Slagle's program for doing integration-- yeah, it was about five pages of Lisp. And Joel Moses said that it took him several weeks to figure out-- because Slagle was blind and had to program in Braille. So Joel said that he made the most intricate, convoluted expressions so that he wouldn't have to type so much. And then Joel-- so that was the first.

I had written a program that differentiated algebraic expressions. And that was a great breakthrough, although it was completely trivial. Namely, I just put in the letter D. And if it saw an expression x times y, it would say x times dy plus y times dx. And there are only four or five such rules. Then it would just sweep through until it had this big long expression, and that turned out to be the derivative.
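A minimal sketch of that kind of differentiator, with expressions as nested tuples and just those few local rules; the representation and rule set here are illustrative, not the original program:

```python
def d(e, x):
    """Derivative of expression e with respect to the symbol x."""
    if e == x:
        return 1
    if not isinstance(e, tuple):           # any other symbol or constant
        return 0
    op, a, b = e
    if op == '+':                          # d(a+b) = da + db
        return ('+', d(a, x), d(b, x))
    if op == '*':                          # d(a*b) = a*db + b*da
        return ('+', ('*', a, d(b, x)), ('*', b, d(a, x)))
    if op == '^' and isinstance(b, int):   # d(a^n) = n * a^(n-1) * da
        return ('*', ('*', b, ('^', a, b - 1)), d(a, x))
    raise ValueError(f"no rule for {op!r}")

# d/dx of x*y sweeps the product rule through the expression:
print(d(('*', 'x', 'y'), 'x'))  # ('+', ('*', 'x', 0), ('*', 'y', 1))
```

As in the story, the raw output is long and unsimplified-- the 0s and 1s are left in place-- which is exactly why a simplifier was the natural next step.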

But then Joel wrote-- the trouble is it was too long. And then Moses wrote something to simplify it. And then Slagle wrote a simple integration program. And then Moses wrote a really complicated one. And eventually, a couple of other mathematicians studied that and extended it and worked out a theory of-- for the final integration program, that could integrate any expression that had an integral in closed algebraic form. Which means a function of exponentials, sines and cosines, and polynomials.

And the result of that was a sort of nice story, which is that the American Mathematical Society had a big suite of rooms in Providence, their headquarters, where they had collected all the integrals that were known for hundreds of years, ever since Newton did the first ones. And there were rooms full-- so every time somebody found a new integral, they would write it up and send it to the American Math Society. And it would get cataloged there. And they had raised funds for-- it was called the Bateman Manuscript Project. And there was a fund for organizing all this data. And the minute the program came out, the Bateman Manuscript Project was terminated and closed.

[LAUGHTER]

Because-- and I think Macsyma had the solution in it. And Mathematica is the sort of big successor to that. But it was a nice piece of history spread over about five or six years. And I don't know that anybody works on that anymore. We had a couple of PhD theses that started out trying to solve differential equations, and they didn't get very far.

play101:12

That's probably an important--

play101:17

it looks like Lemelson is over.

play101:21

[INTERPOSING VOICES]

play101:21

AUDIENCE: It's the 100K.

play101:23

AUDIENCE: Elevator Pitch.

play101:26

MARVIN MINSKY: You think they actually awarded one?

play101:28

AUDIENCE: Yeah!

play101:29

AUDIENCE: Probably.

play101:30

[INTERPOSING VOICES]

play101:35

AUDIENCE: Well, they have 100K.

play101:37

I think the Elevator Pitch contest as separate things.

play101:40

AUDIENCE: Yeah, they-- oh.

play101:42

There's three parts of it.

play101:43

That's the first one.

play101:48

MARVIN MINSKY: Well, any more--

play101:57

oh, way back.

play101:58

AUDIENCE: Do you think there's ever

play102:00

going to be a way to crowd source the AI research at all?

play102:08

MARVIN MINSKY: That's what I meant.

play102:09

That's the expression I was looking

play102:11

for for fixing the push thesis.

play102:13

But it would be nice.

play102:14

AUDIENCE: It wouldn't be a self-organizing thing.

play102:16

Like someone would have to--

play102:18

I mean, is that--

play102:19

I feel like that would not be-- it would be hard for people

play102:23

to self-organize to do that.

play102:26

But there were already [INAUDIBLE] structure.

play102:27

And that minimal piece that everyone

play102:30

could do for AI research that's already defined.

play102:34

Do you think AI research is structured in a way that

play102:38

could never be broken down?

play102:39

MARVIN MINSKY: Well, don't these crowd things usually

play102:42

start with some--

play102:44

they must start with some sort of leader

play102:46

but then they become self organizing or--

play102:51

AUDIENCE: I mean, they [? weren't. ?]

play102:53

Because every participant had a specific-- has

play102:57

a specific and distinct-- had basically the same small

play103:00

[INAUDIBLE] And they don't become more complex than that.

play103:08

I mean because of the community or whatever.

play103:10

But it doesn't-- the idea of lowering the floor of doing AI

play103:17

research so that more people can contribute.

play103:23

MARVIN MINSKY: It's a nice question.

play103:25

Well, let's think about it.

play103:26

If we take this PhD thesis code, it

play103:36

wouldn't be much good to send the whole thing to everyone.

play103:40

Well, of course it wouldn't cost anything to do that.

play103:43

But you need somebody to make a first pass at chopping it

play103:47

up into, say, look at this function didn't work.

play103:52

Maybe there's some code missing.

play103:55

So you might need a person or a couple of people

play103:59

to sort of organize the project.

play104:02

But once you've got a community, they

play104:08

might be able to cooperate.

play104:12

AUDIENCE: [INAUDIBLE] already [INAUDIBLE]

play104:14

to be able to contribute.

play104:16

MARVIN MINSKY: They might do without a leader

play104:18

once the problems became clear enough.

AUDIENCE: [INAUDIBLE]. I would say that, I mean, there are some crowdsourced AI projects. Certainly, if you go to Source [INAUDIBLE] or the restaurant game-- that's crowdsourced. [INAUDIBLE] crowdsourcing [INAUDIBLE]. But I don't think crowdsourcing is really great for problems that demand creativity. [INAUDIBLE] commands that are-- projects that are labor-intensive. It's good for SETI@home, that kind of thing. But for projects that demand a lot of creativity, it kind of breaks my heart almost. Because if you look at, like, open source Unix, you know they've done a great job at organizing people to work on Unix and three versions of Linux and three versions of Unix. But on the other hand, the software isn't very innovative. You know, they just implement 60 versions of the [INAUDIBLE]. And the Unix interface hasn't changed since the 1960s, pretty much. You know, everybody is still programming in terminal windows. So I think it's-- crowdsourcing is mainly good for projects that are labor-intensive. In AI, I think you would see individual creativity more. It's like McCarthy said: we'd need 2.3 Einsteins and 1.7 Manhattan Projects. And, you know, [INAUDIBLE] it's probably good for the Manhattan Projects but not for the Einsteins.

MARVIN MINSKY: On the other hand, given that we have the movie, it might be that the problem of getting this code to make that movie isn't so creative. You could see-- you can start it up and see where it gets stuck, and-- it's worth a--

AUDIENCE: [INAUDIBLE] if you have something along the lines of Watson that incorporates lots and lots of small programs. And you can have people contribute small programs, whether they're good or bad. [INAUDIBLE] figure it out and then [INAUDIBLE].

AUDIENCE: Well, in some sense, Watson was crowdsourced. Because it wasn't only developed by that IBM group. They had Watson collaborators at ISI and CMU and other places. And they crowdsourced it by getting little research grants to integrate their part of the research into Watson. So I think you could argue that it actually was crowdsourced.

MARVIN MINSKY: Another thing we haven't tried is called throwing money at it. If we-- suppose we got $500,000 or $1,000,000 and told some programmer, can you get this to work?

AUDIENCE: Well, if you have good enough inspirations of various small programs that [INAUDIBLE] something, then I think that part of the problem is creativity [INAUDIBLE]. Because some parts-- or some [? soft ?] programs could be more [? stupid. ?] And as long as you have a description of what each program is doing, then you could have some really creative program that's-- that might use those [? stupid ?] programs.

[INTERPOSING VOICES]

AUDIENCE: But you still need that one very creative person to [INAUDIBLE]

AUDIENCE: But you must be so-- [INAUDIBLE]

AUDIENCE: Yeah. I'd [INAUDIBLE] World of Warcraft 10 and leak it on the internet.

[LAUGHTER]

MARVIN MINSKY: My grandson was suspended from Warcraft for three weeks because he hacked the thing to get a higher priority on something.

[LAUGHTER]

I think he was not-- do you know how old he was?

AUDIENCE: [INAUDIBLE]?

MARVIN MINSKY: Yeah. No, Miles.

AUDIENCE: No. I don't think he's played one of those games in a long time.

MARVIN MINSKY: No, I think he was about 10 or 12. And he had actually managed to get into the thing and get instant service. He was very proud of being banned.

[LAUGHTER]

I give up. Any last request or idea? Thanks for coming.
