8. Question and Answer Session 2
Summary
TLDR: In this video transcript, Marvin Minsky, one of the pioneers of artificial intelligence, holds a wide-ranging conversation with the audience. He discusses changes in education, including the performance of international students and the growing proportion of women in technical fields. Minsky shares his observations of student patterns over the past decades and expresses concern that fewer students now go on to become researchers or faculty members. He recalls the growth of universities and research institutions in the 1960s, including how companies such as General Motors and IBM supported basic research. Minsky also explores consciousness, cognitive psychology, the future of AI, and how knowledge can be extracted from experience. He is open to the idea that AI could advance through collective intelligence or community-driven research, but notes the limitations of that approach for problems requiring creative solutions. The discussion spans a broad range of topics and showcases Minsky's insight into AI, cognitive science, and education.
Takeaways
- 📚 Marvin Minsky says that despite his many years of teaching he has no fixed lecture format; he prefers to take students' questions and explore them.
- 🌍 Minsky observes that students from different countries arrive with different educational backgrounds; the foreign students seem better educated than the American ones.
- 👩🎓 He notes that the proportion of women at MIT has grown from about 20% when he arrived to roughly 48% today (he had heard 53%), a marked shift in the gender ratio.
- 🎓 Minsky notes that most of his students used to become researchers or faculty members, but now very few do, and he is not sure of all the reasons.
- 📈 He recalls that in the 1960s universities and research institutions were still growing, and large organizations such as IBM and General Motors supported basic research.
- 🔬 Minsky recalls his time at the RAND Corporation, which supported a great deal of basic research in a free, academic atmosphere.
- 🧠 He raises questions about consciousness and thinking, and criticizes the once-popular left-brain/right-brain personality typology as oversimplified.
- 🤖 On artificial intelligence, Minsky discusses the limitations of evolutionary strategies for creating AI, particularly because evolution does not remember the lessons of its failures.
- 🧬 He offers views on genetics, including sweeping lethal recessive genes out of the genome and what that could mean for future human health.
- 🌐 Minsky also explores the relationship between information integration and consciousness, and how the brain processes and integrates information.
- ⚙️ Finally, he discusses the organization of AI research, including how community and collaboration might advance the field.
Q & A
What factors affecting the quality of students' education does Marvin Minsky mention?
-Minsky says foreign students seem better educated than native-born Americans. He also notes the change in gender ratio: women were about 20% of students when he arrived at MIT, and are now about 48%.
How does Professor Minsky view the changes in the student body over the past decades?
-He has observed the growing proportion of women, and that international students seem better educated than American ones. He also notes that most of his students used to become researchers or faculty members, but now very few do.
Why does Professor Minsky think fewer students become researchers or faculty members than before?
-He suggests several possible reasons, noting that in the 1960s universities and research institutions were growing and large companies such as IBM and General Motors supported basic research. Such institutions are scarcer now, and even places like CBS Laboratories and Westinghouse are not what they once were.
What is Professor Minsky's view of the decline in academic positions and the difficulty of academic careers?
-He is concerned that the shrinking number of academic positions has made the path to a professorship harder. Many people recognize this early and instead find jobs on Wall Street and the like, sneaking their research in on the side.
Which historical research institutions or companies does Professor Minsky cite as important contributors to AI and computer science?
-He mentions IBM, General Motors, CBS Laboratories, Westinghouse, Stanford Research Institute, and the RAND Corporation, all of which made important contributions to early computer science and AI research.
What is Professor Minsky's view of consciousness?
-He believes consciousness is not a central mystery; its problems will fall out naturally once a good theory of psychology exists. He criticizes philosophers and psychologists who treat certain perceptual experiences (such as the perception of color) as indivisible, fundamental problems, the so-called qualia.
What does Professor Minsky think of evolution's potential for creating artificial intelligence?
-He believes that although evolution can produce complex life forms, it is limited as a way to create AI. He points out that evolution keeps no record of failed mutations, so every lethal mutation eventually kills some individual. If we could learn from failures and avoid repeating them, AI development might be far more effective.
How does Professor Minsky assess the current progress of AI research?
-He believes that although a great deal of money goes into AI research, much of it flows in the wrong directions. He worries that most research concentrates on easy problems rather than harder ones, and argues that more innovative and creative approaches are needed to advance the field.
What does Professor Minsky think of the idea of AI taking part in government leadership?
-He responds with humor, noting that in many science-fiction stories AI has already become the leader of the government, but he regards this as fictional speculation.
Does Professor Minsky offer concrete suggestions for improving AI systems?
-Yes. He mentions improving systems through higher-level reflection and self-reflection, and the idea of building "critics" into AI systems, components that can point out errors and keep them from recurring.
What is Professor Minsky's view of AI applications in medicine?
-He cites a success by Doug Lenat in medicine, in which an AI system answered physicians' queries, for example about surgical complications. He says the system was well liked by doctors at the Cleveland Clinic and was written up in a magazine, a genuine success story.
Does Professor Minsky discuss the funding of AI research?
-Yes. He notes that much of the money flows to easy problems rather than hard ones, and he floats the idea of giving programmers a sum of money to see whether they can solve a particular AI problem.
Outlines
😀 Opening remarks and support for MIT OpenCourseWare
The session opens with a note that the content is provided under a Creative Commons license and an appeal to support MIT OpenCourseWare so it can continue to offer free, high-quality educational resources. Marvin Minsky says he has not prepared a lecture and instead takes random questions from the audience. Asked whether, as a long-time teacher, he has noticed patterns among his students, Minsky says the foreign students seem better educated than the American ones, and that the proportion of women at MIT has grown.
😉 Academic trends and personal observations
Minsky discusses the changes he has observed in students, including the career paths they choose after graduating. He recalls the growth of universities and research institutions in the 1960s, mentions several large research organizations including IBM, and expresses concern about the current state of research. He also brings up the RAND Corporation and asks the audience what they know of it.
🤔 The challenges of academic careers and developments in Taiwan
Minsky notes that academic positions are harder to find now than they used to be. He mentions Taiwan's expansion of mathematics departments and asks the audience about it. He also describes how the Italian government disrupted an AI research group, and recalls historical exceptions such as Isaac Newton, who preferred to work alone.
🧐 Psychology and cognitive mechanisms
Minsky explores psychology and cognitive mechanisms, including his view of how psychology grew out of logic and philosophy. He mentions the contributions of philosophers such as David Hume, Spinoza, and Kant to theories of cognition, and discusses the formation of psychology as a discipline.
😶 A conversation about the brain and consciousness
Minsky and the audience discuss how the brain, consciousness, and memory work, covering the left-brain/right-brain distinction, definitions of consciousness, and the relation between information integration and consciousness. Minsky is skeptical of consciousness as a well-defined concept and offers a dissenting view of psychology's "hard problem."
🤓 The usefulness of psychological theories
Minsky questions the usefulness of some psychological theories, particularly theories of consciousness and thinking. He stresses the importance of finding well-posed problems and solutions in psychological research, and criticizes certain philosophical positions as potentially harmful to it.
😲 Genetics, evolution, and artificial intelligence
Minsky and the audience discuss the relationships among genetics, evolution, and artificial intelligence, covering recessive genes, the shortcomings of the evolutionary process, and the prospects for AI. Minsky predicts that future technology may sweep lethal recessive genes out of the human genome.
🧬 Genetic information and bacterial symbiosis
Minsky raises the topic of genetic information and bacterial symbiosis, exploring how the genomes of the bacteria in the human gut may affect human health. The discussion covers the ratio of bacterial to human cells and hypotheses about the role bacteria play in human evolution.
🤖 The future of artificial intelligence
Minsky looks ahead at artificial intelligence, discussing its potential and its limits for solving particular problems. He also touches on the possibility of AI in government leadership and explores how computers and programming methods can be used to understand and build intelligent systems.
🧐 The relationship between psychology and AI
Minsky discusses the relationship between psychology and artificial intelligence, arguing that while psychological knowledge helps in understanding AI, progress in AI does not depend urgently on progress in psychology. He notes the isolation of AI research and calls for more interdisciplinary collaboration.
😏 Challenges and opportunities in AI research
Minsky and the audience discuss the challenges facing AI research, including funding allocation, incentive structures, and how to attract more talent to the field. They also discuss innovation in AI research and how collaboration and open-source projects might advance it.
😮 Creativity in AI and open source
Minsky explores creativity in artificial intelligence and the possibilities of open-source projects. He considers how community collaboration might solve complex programming problems, and discusses both the potential and the limits of open-source projects for fostering innovation.
Keywords
💡MIT OpenCourseWare
💡Cognitive psychology
💡Artificial intelligence
💡Evolutionary algorithms
💡Neural networks
💡Consciousness
💡Collective intelligence
💡Cognitive science
💡Problem solving
💡Information representation
💡Computational models
Highlights
MIT OpenCourseWare offers high-quality educational resources for free, supported by donations and additional course materials.
Marvin Minsky observes that foreign students seem better educated than American students.
The proportion of women at MIT has grown from about 20% when Minsky arrived to about 48% today.
Minsky discusses how students' career paths after graduation have changed, with a shrinking share becoming researchers or faculty members.
The growth of universities and research institutions in the 1960s, such as IBM's laboratories, supported basic research.
Minsky criticizes current research funding models, including annual renewals and frequent reporting requirements.
Minsky discusses the difficulty of academic careers and the trend of students recognizing it early and turning to fields such as Wall Street.
Minsky asks about Taiwan's newly founded mathematics departments and the effect of government decisions on research success.
Minsky's discussion of consciousness, including his criticism of an integrated-information account of consciousness raised by an audience member.
Minsky's views on psychology and cognitive science and on their relationship to AI research.
Minsky's discussion of the possibilities and limitations of creating artificial intelligence by evolutionary methods.
Minsky discusses non-coding regions of the human genome, including the remnants of old viruses and carried lethal genes.
Minsky's outlook on future genome-editing technology, such as the elimination of lethal recessive genes.
Minsky's view on the possibility of an AI being elected to government leadership.
Minsky's views on creative versus engineering problems in AI research, and on psychology's potential contribution to AI.
Minsky discusses the funding of AI research and how money may flow to easily solved problems rather than hard ones.
Minsky's view on the possibility of AI playing games and procrastinating.
Minsky explores how to extract knowledge from experience and turn it into rules or learning.
Minsky's discussion of whether AI research could be crowdsourced.
Transcripts
The following content is provided under a Creative
Commons license.
Your support will help MIT OpenCourseWare
continue to offer high quality educational resources for free.
To make a donation or to view additional materials
from hundreds of MIT courses, visit MIT OpenCourseWare
at ocw.mit.edu.
MARVIN MINSKY: Well, I don't have a lecture.
Go ahead.
AUDIENCE: I had a random question.
MARVIN MINSKY: Great.
AUDIENCE: So you've been a teacher for a very long time.
Have you noticed any patterns in the students
over the years or decades?
MARVIN MINSKY: Have I noticed any pattern in students?
AUDIENCE: Yeah, like intellectual patterns or just
people you're interested in, just anything.
MARVIN MINSKY: Well, a few.
The foreigners seem better educated than the Americans.
There are more girls.
When I came to MIT, it was about 20%.
And I think now it's 53%.
Does anyone know?
AUDIENCE: It's like 48%.
MARVIN MINSKY: What?
AUDIENCE: 48%.
MARVIN MINSKY: 48%?
I read that it actually went past 50 for a few minutes.
AUDIENCE: [LAUGHS]
MARVIN MINSKY: No, I think I've complained about the future
though, which is that a large proportion of my students,
by students I mean the ones whose thesis--
I hate to say supervised, because in the case of Pat
Winston, for example, I learned much more than I--
or Sussman.
But most of the students became researchers or faculty members
eventually.
And now it varies.
Now very few of them do.
I'm not sure of all the reasons.
In the 1960s, which is a long time ago,
the universities were still growing,
as an after effect of World War II, I suppose.
I really don't know what caused these major trends.
But there were also a lot of career research institutions
that were large and growing.
Even General Motors had places where
there was some basic research.
IBM was a big research laboratory
that was supporting some very abstract and basic research
of various sorts.
I don't think there's very much of that now.
Even CBS Laboratory.
Westinghouse was doing interesting robotics.
And of course Stanford Research Institute,
which had no relation to Stanford.
Still exists, and it's still pretty good.
But in those early days, it was one of the three or four
richest computer science and artificial intelligence
research places.
There's a place called the RAND Corporation,
which I think still exists.
Does anybody--
AUDIENCE: Yeah.
MARVIN MINSKY: I don't know what it does.
Any idea?
AUDIENCE: They do government [INAUDIBLE]
sort of things, just in terms of writing and [INAUDIBLE]
AUDIENCE: They make some pretty important things but not
necessarily about war, economy games, or politic [INAUDIBLE]
MARVIN MINSKY: But in the 60s, it had a lot of basic research.
It had Newell and Simon and me and a few other people.
And we just went there, and you could
walk on the beach in Santa Monica and go to your office
and talk and do things.
And no one ever bothered us.
And we wrote lots of little papers.
Anyway, grumble, grumble.
Another feature was that places like the National
Institute of Health had five year fellowships.
And now you have to renew--
there are very few appointments of that sort anywhere.
And usually, no sooner do you get
funded than you're starting to write
proposals for the next year.
And some people want reports every quarter.
And Neil Gershenfeld, who was running a big lab here,
wanted reports every month.
And some of us finally gave up on that.
That's a long answer.
So if you want a career in being a professor,
it's just harder to find now than it was then.
And so a lot of people recognize this pretty early
and find some place to work in Wall Street and stuff
like that.
There are lots of jobs for smart people.
But then you have to sneak your research in on the side.
Anybody can think of a way to fix it?
[LAUGHTER]
In the last 20 years, Taiwan made 100 new math departments
I read somewhere.
I don't know if any of you who know anything about Taiwan.
I just wonder if that--
AUDIENCE: Yeah.
MARVIN MINSKY: Yes, were they successful?
[LAUGHTER]
Is there a lot of research there?
AUDIENCE: No.
MARVIN MINSKY: Very often, when a government
decides on the right thing to do, it doesn't work.
I had some friends in Italy who were
trying to start an AI group.
And they had accumulated a critical mass in--
what's the big city in the north--
Milan.
And then some government committee
said, oh, there is a bunch of computer sciences there,
but there's no good computer scientists in Pisa and Verona.
So the government can order a professor to leave one place
and go somewhere else.
So the next year, there were no groups.
And occasionally, there are people like Isaac Newton
who liked to work alone.
[LAUGHTER]
But I got the impression that the product
of the Italian researchers diminished after that.
Might be wrong.
How about a more technical question?
Thanks.
AUDIENCE: It looks like you had a complicated diagram
concerning story.
Do you recall of any layers?
MARVIN MINSKY: Yeah.
AUDIENCE: Was that meant to be a bi-directional diagram?
Because it worked from the bottom
up as well as the top down.
MARVIN MINSKY: I'm confused about whether that--
let's see if I can find it.
Why did this shut down?
Do I dare press start?
AUDIENCE: It's alive on the screen.
MARVIN MINSKY: Oh my gosh.
[LAUGHTER]
I never saw that phenomenon before.
AUDIENCE: Could you do [INAUDIBLE] displays?
MARVIN MINSKY: Yeah, I can.
[LAUGHTER]
AUDIENCE: There should be a button that changes [INAUDIBLE]
AUDIENCE: Oh, here we go.
MARVIN MINSKY: What?
Did it go on?
AUDIENCE: [INAUDIBLE]
MARVIN MINSKY: Oh, it's up.
Oh well.
It might be in this random lecture.
How do I get rid of those?
AUDIENCE: I think you might be able to go into View at the top
to get rid of it.
MARVIN MINSKY: There's a sort of bug in the tool box
thing on the Macintosh, which is,
if you make one of these too long,
there's no way to get rid of it except to restart
the machine in some other mode.
I can't catch it.
Maybe this works.
Oh well.
That diagram, there's two hierarchical diagrams.
The theme of the emotion machine book
is mostly the six layers of instinctive, built-in
reactions, learned, conditioned reactions, and going up
to reflective and self reflective and so on.
And the other diagram starts out with just a neural net,
and then things like K-lines, which
are ways to organize groups of activities,
and then frames and trans frames.
A trans frame is a way of representing knowledge
in terms of how an action affects
a situation: a particular situation
and an action produce a new one.
And then a story is usually a chain of trans frames.
And of course, a meaningful story
is one which I didn't have a level for,
good stories and useless stories.
So somewhere at a very high level,
we all have knowledge of, if you're
facing some sort of problem, what kind of strategy
might be good for solving that kind of problem?
And in that case, each layer is made
of things in the lower layer.
Whereas in the society of mind hierarchy,
each layer does different things that
operate on the results of the other layers.
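Minsky's trans frames and story chains can be sketched as a small data structure. This is a hypothetical illustration, not code from the lecture; the names `TransFrame`, `before`, `action`, `after`, and `is_story` are mine. The point is that a story checks out as a chain in which each frame's result is the next frame's starting situation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TransFrame:
    """One piece of knowledge: an action turns one situation into another."""
    before: str   # the starting situation
    action: str   # the action taken
    after: str    # the resulting situation

def is_story(frames):
    """A story is a chain of trans frames: each result is the next start."""
    return all(a.after == b.before for a, b in zip(frames, frames[1:]))

story = [
    TransFrame("hungry", "walk to kitchen", "in kitchen"),
    TransFrame("in kitchen", "make sandwich", "holding sandwich"),
    TransFrame("holding sandwich", "eat it", "not hungry"),
]

print(is_story(story))                 # True: the frames chain into a story
print(is_story([story[0], story[2]]))  # False: a gap breaks the chain
```

A "meaningful" story, the level Minsky says he did not have, would need something more than this chaining check, such as whether the final situation satisfies a goal.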
I guess if you look at any mechanism,
you'll have a diagram of what the parts do
and how they relate.
And you'll have a diagram, which
isn't in the machinery, of what
the functions of the different sets of parts are
and how those functions are related?
So that might be a bug in both books,
that I drew the diagrams to look pretty similar.
It's a bad analogy.
AUDIENCE: [INAUDIBLE] was it a stimulus-response model,
where if you fed a story into it, beneath it
were the interpretive mechanisms?
But does it flow the other way?
Is it generative from bottom to top as well?
MARVIN MINSKY: Well, in some sense,
this trans frame says, here's a piece of knowledge, which says,
if you're in such a situation, this is a way
to get to another situation.
In the traditional behavioristic--
behaviorist is a word for the class of generations
of psychologists who tried to explain behavior just in terms
of reacting to situations.
And that wasn't connected to--
what am I trying to say?
In the standard behaviorist models,
which occupied most of psychology
from the 19th century up to the 1950s when modern cognitive
psychology really started, you just looked at the animal
as a collection of reactions.
And then in cognitive psychology,
you start to look at the animal as having goals and problems.
And then some machinery is used to go from your--
the way you describe your situation,
to generating a plan for what you're going to do about that.
And then the plan ends up being made
of little actions, of course.
But before 1950, there were only a few psychologists who--
and philosophers, I should say, going all the way back
to people like David Hume and Spinoza
and maybe Immanuel Kant.
They made up-- if you read their stuff and ignore
the philosophy, you see that there was a very slow progress
over really three centuries of trying to get from logic,
which sort of first appears around the time of Leibniz--
when is Leibniz?
1650 or so?
AUDIENCE: [INAUDIBLE].
MARVIN MINSKY: Around, yes.
They never met, I believe.
So a lot of philosophy has--
which I don't know how to describe the rest of it.
But a lot of it is making--
trying to make high level theories of how thinking works.
And it's, of course, mixed with all sorts
of problems about why the world exists and ethics
and what are good things to do and bad and all sorts of mixed
up things.
And psychology doesn't appear--
I don't think there's a name for that field
until the 1880s or so.
Who's the first psychologist you can think of?
AUDIENCE: William James.
MARVIN MINSKY: William James is around 1890.
There's a guy named [INAUDIBLE] in Austria, I think.
Sigmund Freud starts publishing around 1890.
Francis Galton in England is maybe
the first recognizable psychologist.
He has a big book called An Inquiry Into Human Faculty
which makes good reading right now.
Because it has-- each chapter is about a different aspect
of what would be called modern cognitive psychology.
How do people recognize things?
What kinds of memory cues do you use to retrieve stuff?
All sorts of sort of--
they're like term papers, the chapters.
Some little theory.
And you'd say, I can do better than that.
And indeed, you could.
But at that time, no one could.
Yes?
AUDIENCE: I feel like psychology is
thinking about how people think, which I think [INAUDIBLE]..
Aristotle does it.
MARVIN MINSKY: Aristotle has more good ideas than,
as far as I'm concerned, everyone else put together
for the next 1,000 years.
It's just very remarkable.
And we don't know anything about that
because there are no manuscripts.
Anybody-- there's that wonderful play by--
who's the Italian?
What?
AUDIENCE: Dante?
MARVIN MINSKY: No, no, a recent one.
AUDIENCE: [INAUDIBLE].
MARVIN MINSKY: No, he's sort of contemporary--
oh well.
Anyway, he has a play about searching for the lost--
there's some record that Aristotle had a book of jokes,
or rather a book-- he has books on ethics and things like that,
and there's a book about humor which is lost.
And most scholars think it's not important,
because if you look at the 10 existing books on Aristotle--
I think there's about 10--
allegedly by-- and there are students' notes.
And almost every subject appears in at least two of them anyway.
So one conjecture is that there really isn't any--
very much lost from ancient times.
Anyway, if you ever read books, you might as well read one
or two of Aristotle's.
Because it's-- the translations I'm told are pretty good,
and you can actually get ideas from it.
Yes?
AUDIENCE: I don't know if you ever heard about [INAUDIBLE]..
MARVIN MINSKY: Umberto Eco is the writer.
[LAUGHTER]
Sorry.
How does memory work?
Something-- something about your expression.
Sorry.
AUDIENCE: [INAUDIBLE] he tries to explain consciousness.
But you say that consciousness is [INAUDIBLE] work.
But I don't quite agree with his definition.
But basically his definition is that the more
the information is integrated in more portions, being is.
MARVIN MINSKY: The more information you have?
AUDIENCE: The more integrated the information is.
So for example, I don't know, he used the example of a MacBook
that has a lot of information that's not integrated.
Like, it is not correlated, and so it's not very conscious.
MARVIN MINSKY: That sounds like an important idea
and there ought to be a name for it.
AUDIENCE: Yeah, he had something.
But I think [INAUDIBLE] And this guy is, like, a neuroscientist
and psychologist.
And like you see some edge cases of people
that split their brain in half.
And it seems that both halves are kind of conscious.
But I [INAUDIBLE] because that people, they
still have information that's integrated.
But it seems that they are not conscious.
So there must be some action into that information,
even if it's passive or active.
But it seems very interesting.
MARVIN MINSKY: Well which of my 30
features that go into that suitcase do they have?
It doesn't make any sense to say something is conscious or not,
does it?
You just said it yourself, that there's
some degree of integration perhaps.
But can you say what you mean by integration?
You probably need to say 20 things and many of them
might be independent.
Here's an example of something.
Many years ago, people in the 1950s and '60s,
it was very popular to talk about the left and right brain.
Have you heard people say-- what's
the difference between the left brain and the right brain?
AUDIENCE: Rational--
MARVIN MINSKY: Rational versus emotional?
Now I haven't heard anybody discuss that
for the last 15 or 20 years.
AUDIENCE: Although it seems to have
become really enmeshed in popular culture now.
If you asked anybody what they know about the brain, what
the person will say is, well, I'm
more of a right-brained person or a left-brained person.
That seems to be a sticking point.
MARVIN MINSKY: They used to, but I haven't heard
that for at least 15 years.
I have not heard a single person, psychologist,
mention it.
Have you?
AUDIENCE: I think fMRI has all but obsoleted that theory.
AUDIENCE: There's one thing it's good for.
It's disproving that.
MARVIN MINSKY: Anyway, I mention--
in The Society of Mind, I think, I had a grumble about it.
Which is that, as far as I can tell,
it appears to be true that language
is located in most people in two very
definite areas in the left brain but occasionally,
in the right brain of some people.
But other than that, as far as I can see,
when you actually catalogue the differences
that the psychologists reported in the 1960s and '70s, then
the things in the left brain were largely
adult kinds of thinking, and the things in the right brain
were largely childish.
Not-- it wasn't that they were rational or not,
it was that they weren't very hierarchical and tower like.
And I think there was a nice romantic idea
of contrasting emotions and intellect and all
those dumbbell distinctions and projecting them onto the brain.
But I don't know how I--
what started me on that track.
But it's interesting that it was very, very popular
and psychologists talked about it all the time
when I was a student.
And I haven't seen it mentioned by any cognitive psychologist
for--
yeah?
AUDIENCE: So he mentioned this theory, but we don't--
I believe we don't test our theory with edge cases.
So like mental [INAUDIBLE] people or people that--
probably there are a lot of people that--
not a lot, but some percentage of people that are mentally ill
or don't have--
form so well in some part of the brain.
And maybe we can have some idea of like what consciousness
is, just by seeing people that don't
have some part of the brain that might interfere with something.
I don't know.
Like this big brain may give a reason why--
what consciousness is.
Because maybe some half a brain [INAUDIBLE] consciousness.
MARVIN MINSKY: But I don't understand what you're--
you're trying to-- you're trying to construct a meaning
for the word "consciousness."
AUDIENCE: Well, Tony is definitely onto something
interesting.
And I think the reason that he uses the word "consciousness"
is that it's in the sense that people talk
about losing or regaining it.
And so he can actually experimentally test
this theory--
people who are asleep, or in a coma,
or dreaming, or locked in, or is just in a vegetative state.
[INAUDIBLE] this theory actually agrees
with sort of a common-sense idea of whether this person is
conscious in a temporary way.
MARVIN MINSKY: But then is that different from--
if you used the word "thinking" instead, you
could say when somebody is in a coma, they're not thinking.
AUDIENCE: I don't think that it's good for him
to use the word "consciousness."
I think that the word "consciousness,"
to many people, refers to a lot of things
that his theory does not treat at all.
MARVIN MINSKY: See, it's really dangerous if you--
is it Pinker who likes--
I forget.
AUDIENCE: Yeah.
MARVIN MINSKY: It's dangerous to feel sure
that there is something very important
and a central mystery and--
what does he call it?
The hard problem of psychology.
And so here is really a very smart guy, Steven Pinker.
And as far as I can see, he does nothing
but harm to the people he talks to, because he
gets them to do bad experiments and waste their time.
So instead of trying to revive consciousness,
it's worth considering that might be a very bad thing
to do to yourself and other people.
What problem are you trying to solve?
Is there any way--
or the problem of qualia, for example.
Because the standard view--
and this is something that still is a serious disease even today
in philosophy.
That is, the idea that the redness of red things
is a very fundamental thing.
It's indivisible.
It's not describable.
It's like-- to those philosophers,
that's just as important as when--
who was the Greek--
Democritus, was it?
Who discovered atoms?
The idea of atoms was an enormous breakthrough.
Of course, it took 2,000 years before people
realized that, yes, there are atoms and they're not.
They're actually complicated systems
made of quarks and 5 or 10 other things.
So now we don't have atoms anymore.
But I think Pinker has the idea that red is irreducible.
And you can't describe it.
It's like the atom of thought.
And these qualia are the fundamental problem
of psychology.
To me, it's exactly the opposite.
Why do we have a word for it?
When I say red, do you experience the same thing
as anyone else who says red?
And it seems to me that somebody who
got sick after eating a tomato has a different qualia for red
and, you know, blood, violent things, bad.
Maybe another child has all sorts of pleasant associations
with things that are red.
And the concept of red is--
it's not that it's inexpressible because it's indivisible.
It's inexpressible because it's connected with thousands
of other ideas and experiences.
And therefore, there's no way to make
a compact definition of it.
But it's exactly the opposite.
It's not the hard problem of psychology.
It's not a problem--
it's something that will fall out automatically
without any effort when you have a pretty
good theory of psychology.
AUDIENCE: But why do we have these qualia [INAUDIBLE] Why?
MARVIN MINSKY: Why do we have descriptions of things?
Because the animals that don't have
compact descriptions of things get eaten very quickly,
because they can't recognize things that might hurt them.
It's very important to have machinery
for recognizing real things.
And real things have features.
In fact, there is such a thing as redness--
namely, the frequencies of light of what?
Around 400 nanometers?
What's the frequency?
What?
AUDIENCE: 700 nanometers?
MARVIN MINSKY: That far?
That's infrared, isn't it?
AUDIENCE: A little bit.
650, 680.
MARVIN MINSKY: Anyway.
One of the things somebody pointed out to me in later life
is that there's only one yellow.
There are a lot of shades of red but interesting
how tiny the yellow spectrum is.
I don't know what it means.
If you look around a room there--
I don't see a single one.
AUDIENCE: It might be a lion.
MARVIN MINSKY: What?
AUDIENCE: It might be a lion.
MARVIN MINSKY: A lion, yes.
Does anybody see anything yellow in here?
AUDIENCE: [INAUDIBLE] the consistency of yellow light
and can you do it?
[INAUDIBLE]
MARVIN MINSKY: Yes, what element has a bright yellow line?
AUDIENCE: Sodium.
MARVIN MINSKY: Sodium.
It's, yeah, orange-ish.
AUDIENCE: It's orange.
Yellow as the sun.
MARVIN MINSKY: Yes.
Maybe that's very important.
It's in the bin.
That's great.
AUDIENCE: So this color is called warm white.
[LAUGHTER]
MARVIN MINSKY: In the story, yeah.
AUDIENCE: It has a qualia [INAUDIBLE]..
[LAUGHTER]
MARVIN MINSKY: Warm white.
AUDIENCE: Warm white.
MARVIN MINSKY: What is it in Finland?
AUDIENCE: I don't-- it's called--
the light like that [INAUDIBLE] comes from the tungsten--
the [INAUDIBLE] tungsten light bulbs [INAUDIBLE]
MARVIN MINSKY: Yes, that's right.
I've stocked up on 20 watt tungsten bulbs.
Because my house is full of fluorescent bulbs
that are remote controlled by things.
And if there's no incandescent bulb in one of the sockets,
then the remote controller breaks.
These are the things you buy with, what are they called?
Little units that--
AUDIENCE: X10?
MARVIN MINSKY: X10, right.
The old X10 units, the receivers burn out
if there's no resistive load on them.
So I have to have enough incandescent bulbs
for the next 20 years or get rid of the X10s.
I think they're illegal in Japan or have--
they're still there?
AUDIENCE: Yeah.
You can still find it in some shops,
and people buy them so that [INAUDIBLE]
MARVIN MINSKY: I bought a lot of LED light bulbs
at the Swapfest the other day.
Back to AI.
AUDIENCE: So the reading, you seem
to imply that evolution is the best strategy for creating AI.
Because, one, it'll take a lot of time.
And two, because you'll get stuck a [INAUDIBLE]..
But if we had infinite time and enough mutation,
do you think it'd be possible to create
a good artificial intelligence using evolution?
MARVIN MINSKY: Well, if there's somebody in charge.
If you have evolution like on a big planet,
then you get a lot of lifeforms.
And so the problem is that you might
have some really stupid life form that eats the smart ones.
But I have a more serious objection to evolution.
You see, there have been several projects in the last--
well, since computer science started--
of trying to make problem solvers
smart by imitating evolution, which
is variation and selection.
So I know of about five or six such projects which were
fairly well funded and serious.
What's most interesting maybe was
the one of Doug Lenat, which was just him by himself.
So if you look up Douglas Lenat's thesis,
which was called--
I forgot the name.
AUDIENCE: AM?
MARVIN MINSKY: AmM, Automated Mathematician.
And a second publication called Eurisko E-U-R-I-S-K-O.
Those were projects in which he did variation and selection.
And he imitated chromosomes by having
strings of simple operations which were usually
things like adding and subtracting
and conditional jump and so forth.
But there are several bugs with organic evolution.
And the most serious one, which is that evolution
doesn't remember what killed the losers.
So there's no record in the genes of the mutations
which were lethal.
And in fact, it's almost the opposite.
I'm told that in the human genome--
I believe, is it still 90% doesn't do anything?
Some large fraction?
AUDIENCE: Someone who [INAUDIBLE] do something,
actually.
MARVIN MINSKY: Well, they once did presumably.
About 90% of the human genome and a lot of other animals
is not transcribed into proteins.
And a fair amount of it is old inactive viruses.
So it has, you know, maybe 90% of some really deadly virus
that got incorporated into the genome and gets copied.
So the big bug in evolution, to me,
is that if you're going to build a system that's
going to try to develop a new kind of program
by trial and error, the standard approach is to imitate Darwin.
And you mutate these programs, you give them a test,
and you then copy the programs that pass the test
and repeat the cycle.
So what happens is you collect--
because you're mutating them as you go along,
you're collecting genes that help solve problems.
But you're not collecting information
about genes that make the animal worse
or make it fail to solve problems.
So this is true of all of evolution, as far as I can see,
that there's no record kept of the worst
things that can happen.
And so every lethal mutation eventually kills someone.
A lethal mutation is one--
you know, you have two copies of every gene, one from a mother
and one from a father.
And if you get two copies of the same gene--
and most genes have--
a lot of genes are recessive in the sense
that, unless you get two of them, they're not expressed.
If you have a lethal recessive gene,
that usually means that you can have one of that gene
and you're not sick.
But if you have two of them, it eventually kills you.
And it might kill you before birth,
so you don't even get an embryo.
Or it might kill you when you're 40 years old,
as in that horrible Huntington's disease, where you
can carry one and not suffer.
But if you get two, it kills you in middle age which
is very expensive for society.
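The Mendelian arithmetic behind this can be checked by enumerating the cross of two healthy carriers. This is a minimal sketch under the standard two-allele model; the allele labels are arbitrary. On average a quarter of the offspring inherit both copies of the lethal allele, and half are healthy carriers who keep it in the population.

```python
from itertools import product

# Each parent is a healthy carrier: one normal allele 'A', one lethal recessive 'a'.
parents = ("A", "a"), ("A", "a")

# Enumerate the four equally likely allele combinations a child can inherit.
offspring = [tuple(sorted(pair)) for pair in product(*parents)]

affected = offspring.count(("a", "a"))   # two lethal copies: the gene is expressed
carriers = offspring.count(("A", "a"))   # one copy: healthy, but passes it on

print(affected / len(offspring))   # 0.25
print(carriers / len(offspring))   # 0.5
```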
Anyway, there's no record.
What you want to do is, for each problem solver
that doesn't work, you want your evolution program
to see why it doesn't work and not make that kind of gene
again or whatever was responsible for it.
So that's a big bug in Darwinian evolution.
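Minsky's proposed fix, an evolutionary search that notices why a loser lost and never makes that mutation again, can be sketched as a toy experiment. Everything here is invented for illustration (the genome length, the hidden lethal positions, the hill-climbing loop); the point is only the contrast between plain Darwinian search, which keeps repeating fatal mutations, and a variant with a critic-like memory that bans them after one death.

```python
import random

N = 16                     # genome length (toy example, not from the lecture)
LETHAL = {3, 7, 11}        # hidden positions where setting a 1 is fatal

def fitness(genome):
    """A genome with any lethal bit set 'dies'; otherwise count the 1s."""
    if any(genome[i] for i in LETHAL):
        return None        # dead: plain Darwinian search keeps no record of why
    return sum(genome)

def evolve(steps, remember_failures):
    """Hill-climb by single-bit mutation, optionally remembering fatal mutations."""
    rng = random.Random(0)
    genome, banned, deaths = [0] * N, set(), 0
    for _ in range(steps):
        pos = rng.randrange(N)
        if remember_failures and pos in banned:
            continue              # a 'critic' vetoes a mutation known to kill
        child = genome[:]
        child[pos] ^= 1           # mutate one gene
        f = fitness(child)
        if f is None:
            deaths += 1
            if remember_failures:
                banned.add(pos)   # record WHY this loser lost
            continue
        if f >= fitness(genome):
            genome = child        # keep the survivor and repeat the cycle
    return fitness(genome), deaths

for remember in (False, True):
    best, deaths = evolve(400, remember_failures=remember)
    print(f"remember={remember}: best fitness {best}, fatal trials {deaths}")
```

Both searches reach the same best fitness, but the forgetful one keeps paying for the same lethal mutations over and over, which is the waste Minsky is pointing at.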
And the interesting fact is that every lethal recessive
gene will eventually, on the average, kill someone.
This is not well-known.
You see the arithmetic?
Because it has to wait till there are two of them, and then
it kills that person.
And if you calculate the probabilities
that there's a half chance of getting each of them
in each generation, the math shows that eventually there's
one premature death for each recessive gene.
It's kind of funny.
So it would be nice if we had some way to clean them
up once and for all.
And then everybody would be a lot healthier.
I bet, within the next 20 or 30 years,
we'll see some project which is to get rid of--
just take somebody's genome, sweep out
all the lethal recessives, and get rid
of 100 diseases or more.
And suddenly, everybody will live
to be 150 years instead of 100.
Something like that ought to happen.
AUDIENCE: There's a theory as to why
recessive genes stay in the population
despite killing off people.
And there are some genes for which
it seems to be the case that, you know,
when you get two recessive genes, you die.
But having the heterozygous population
gives you some benefit by giving benefit
against a different disease.
And that's why it exists.
So just getting rid of all the recessive lethal genes
might cause problems.
MARVIN MINSKY: Wow, I hadn't thought of that.
Are there some examples?
AUDIENCE: Oh, yeah.
Malaria.
AUDIENCE: Yeah, sickle cell anemia.
Malaria, so if you have--
you have sickle cell, you cannot get malaria.
AUDIENCE: If you're heterozygous for the sickle cell disease,
[INAUDIBLE]
MARVIN MINSKY: But that's not very beneficial,
because you usually die when you're around 40.
AUDIENCE: No, no, no.
If you're heterozygous for sickle cell.
MARVIN MINSKY: Oh.
AUDIENCE: Then you don't have sickle cell disease,
but you have benefits against malaria.
MARVIN MINSKY: Oh, I didn't know that.
AUDIENCE: The best example commonly given
in all biology classes.
But I'm sure there must be other examples.
MARVIN MINSKY: I never took a biology--
that's good.
So we could probably find one that--
we just have to tailor it a little bit.
Yeah, so the mosquitoes don't like it?
Is that what it is?
AUDIENCE: It's just bad enough blood
that the mosquitoes will ignore you,
but not bad enough that you die.
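The balance the students are describing is the textbook case of heterozygote advantage, and its equilibrium can be checked numerically. A hedged sketch with invented fitness numbers (not data from the discussion): if normal homozygotes AA pay a malaria cost t, carriers Aa are protected, and aa (sickle-cell disease) is treated as lethal, the standard one-locus recursion settles at q̂ = t/(1+t), so the lethal allele is maintained rather than swept out.

```python
# One-locus selection with heterozygote advantage (hypothetical numbers):
# fitness(AA) = 1 - t   malaria cost for normal homozygotes
# fitness(Aa) = 1       carriers: protected, no sickle-cell disease
# fitness(aa) = 0       sickle-cell disease, treated as lethal here
t = 0.15                      # hypothetical malaria cost
q = 0.01                      # initial frequency of the sickle allele
for _ in range(500):
    p = 1.0 - q
    w_bar = p * p * (1.0 - t) + 2.0 * p * q   # mean fitness (aa contributes 0)
    q = p * q / w_bar                          # next-generation allele frequency

# Predicted stable equilibrium: q_hat = t / (1 + t)
q_hat = t / (1.0 + t)
print(q, q_hat)
```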
MARVIN MINSKY: Does it keep the mosquito from biting you?
Or does it make the mosquito sick or what?
AUDIENCE: [INAUDIBLE].
MARVIN MINSKY: It's just in-- yeah.
AUDIENCE: Yeah.
Some stuff I've read about viruses,
you have people changing their theory about viruses.
And one thing that could maybe-- in some sense,
we're symbiotic with viruses, in some sense [INAUDIBLE]..
But like you say, the jump comes at the genome.
It may be a process that takes advantage of that.
So one thought is maybe the viruses
are the things that [INAUDIBLE] the losers,
remember why losers lost.
MARVIN MINSKY: That's a good point.
There are lots of things we don't know and wrongly believe.
With this synthetic life, there are
two groups starting to make--
maybe more.
There are probably some secret groups
trying to make them, too.
AUDIENCE: Also in some sense, the bacteria
that live in the human body weigh
far more than the cells that are really yours
and so forth and so on.
You know, they're starting to think
that the entire genome [INAUDIBLE] bacteria colonize
you are also part of that equation in some way.
So, you know, it could be that some of the genetic information
in evolution is not kept in your own genome
but is kept in all the organisms that are--
that live in the human [INAUDIBLE]..
MARVIN MINSKY: Yeah, it's--
AUDIENCE: Is there [INAUDIBLE]?
AUDIENCE: Yes, there is.
That somebody is trying to sequence the--
MARVIN MINSKY: Bacteria [INAUDIBLE]??
AUDIENCE: Yeah, [INAUDIBLE].
Do you know what that's called?
MARVIN MINSKY: How many do you think--
AUDIENCE: He's trying to sequence
every genome of everything that lives in your gut.
MARVIN MINSKY: Yeah, how many--
I understand there are more bacterial cells
than somatic cells by a factor of 100 or something.
Because bacteria are so small.
But how many different bacteria infest a person?
Is it hundreds or tens or thousands?
AUDIENCE: I guess that's what we're trying to find out.
MARVIN MINSKY: Yeah.
AUDIENCE: So when you say like, in evolution, it
would be nice if we had everything that went bad--
and then you said-- and then we could
see what went wrong, right?
But isn't it that what we're doing evolution [INAUDIBLE]
we don't have a clear idea of how someone doesn't
have to solve the problem?
So even though we have the information of the solver,
that they don't work.
Like we-- I feel like if we had a way to know what went wrong,
then we would already have information enough
to know what is right, you know?
MARVIN MINSKY: Oh, yes.
AUDIENCE: So how do you decide [INAUDIBLE]??
MARVIN MINSKY: I was thinking of a fairly high level system.
Because take Lenat's system, or Larry Fogel's--
that was another one of these learning-by-evolution systems.
I'm not suggesting that we could make a simple evolution
simulation that would think of reasons why it failed.
So this would be a high level one,
if you're writing a big AI program.
For example, when you learn arithmetic, after a while,
you learn not to divide by 0.
So what do we call negative knowledge?
What are the commonsense things?
Is there a name for the things you should never do?
AUDIENCE: Well, when people talk about--
you know, they-- search tree as a possibility,
you prune the trees.
MARVIN MINSKY: You prune the tree.
But, you know, we have rule based systems.
And they got very popular around 1980
and wiped out most of symbolic AI for a long time.
But there aren't any rules that say, don't do x.
Are they ever?
Do they have some?
AUDIENCE: Some experts [INAUDIBLE]
MARVIN MINSKY: So the question is, when are they invoked?
In a certain situation, turn off this bank of rules, maybe.
So I'm not suggesting that you can make
a very simple system do that.
Because, in fact, figuring out why this mutation was bad
might be a very hard problem.
But as you build smarter and smarter ones,
then you want to put--
well, what I called critics.
Or I don't know.
Freud had a name for them.
At some point, you want to have prohibited actions
and in Sigmund Freud's early model of psychology,
there was a place for things that you
would go away from or not do-- censors, he called them.
And they never appeared in the main line of psychology.
When they threw out Freud, who had a few bad ideas,
they threw out all these good ideas, practically.
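Minsky's point about rules that say "don't do x" can be sketched in a few lines: a toy production system where ordinary rules propose actions and a separate bank of Freud-style censor rules vetoes them before they run. All rule names and situation encodings here are invented for illustration.

```python
# Toy production system with "censors" (negative knowledge).
# Ordinary rules propose actions; censor rules veto proposals.
# Every rule and situation below is a hypothetical example.

def propose(situation):
    """Ordinary rules: map a situation to candidate actions."""
    actions = []
    if situation.get("task") == "divide":
        actions.append(("divide", situation["x"], situation["y"]))
    if situation.get("task") == "move":
        actions.append(("step", situation["direction"]))
    return actions

def censors(situation, action):
    """Negative knowledge: return a reason to veto, or None."""
    if action[0] == "divide" and action[2] == 0:
        return "never divide by zero"
    if action[0] == "step" and situation.get("at_cliff_edge"):
        return "never step off a cliff"
    return None

def act(situation):
    allowed = []
    for a in propose(situation):
        if censors(situation, a) is None:
            allowed.append(a)        # survives the censor bank
    return allowed

print(act({"task": "divide", "x": 6, "y": 0}))   # vetoed -> []
print(act({"task": "divide", "x": 6, "y": 3}))   # allowed
```

A real system would also want Minsky's follow-up idea: the ability to turn whole banks of censors on or off depending on the situation.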
AUDIENCE: You might be pleased to hear that some of the monkey
neuroscientists are starting to find some [? critics. ?]
It's pretty handwave-y stuff as of now.
But at least they're thinking about it.
There's certain tasks where the monkey is cued to pay attention
to one thing or another.
Usually it's color versus orientation.
And what they found is that orientation has dominance.
And so when a cue is telling the monkey
that they have to ignore the orientation
and pay attention to color, the part--
those neurons which are responsible for looking
at the orientation are being actively inhibited
by another group of neurons, which they're now
calling a [? critic. ?]
MARVIN MINSKY: Are these in the same--
or is it a little nearby nucleus that's--
AUDIENCE: A nearby nucleus.
MARVIN MINSKY: That's nice.
So that would be a good place for--
is there a word for negative knowledge?
AUDIENCE: They just call it negative knowledge, I guess.
MARVIN MINSKY: It would have too many different senses.
Advice not to take.
There's some--
AUDIENCE: So this question would imply that there
is a metric for intelligence.
But is there a limit to intelligence?
As in, is it possible to say one day
we have artificial intelligence that is the most
intelligent possible thing?
MARVIN MINSKY: Seems unlikely, because presumably the survival
value of a particular system depends
on the world the thing is in.
It might be that for all really--
for all worlds above a certain complexity,
maybe there are some overall strategies
that are universally better than others or something.
But measuring intelligence doesn't make any sense.
Because you'd-- I think you have to go the way Howard Gardner
did and say, well, there's social intelligence and--
I don't know.
Can anybody rattle off his list?
What are his eight ways of thinking?
Just look up Howard Gardner.
So the amount of intelligence is--
clearly, it's a useful, intuitive idea
that for any particular machine you
could imagine another one that can do everything
that one can do and more.
But you're going to get a lattice, not an ordered thing.
And the lattice won't--
at some point, it will start getting inconsistent.
And this will be better than that one for this and not that.
And--
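Minsky's remark that comparing machines gives you a lattice rather than an ordered thing can be made concrete by modeling each machine as the set of tasks it can do, ordered by inclusion. The task names below are arbitrary placeholders.

```python
# Machines as sets of abilities; "at least as capable" = superset.
# Two machines can be incomparable, so the order is partial, not total,
# but joins (union) and meets (intersection) exist -> a lattice.

A = frozenset({"arithmetic", "chess"})
B = frozenset({"arithmetic", "poetry"})

# Neither dominates the other: no single "amount of intelligence" ranks them.
assert not A <= B and not B <= A

join = A | B      # least machine that can do everything A or B can
meet = A & B      # greatest common core of abilities

print(sorted(join))   # ['arithmetic', 'chess', 'poetry']
print(sorted(meet))   # ['arithmetic']
```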
AUDIENCE: Gardner had about nine different types
of intelligences, according to his Wikipedia article:
logical-mathematical, spatial, linguistic,
bodily-kinesthetic, musical, interpersonal, intrapersonal,
naturalistic, and existential.
MARVIN MINSKY: There you go.
And if you take any one of those--
when I was a mathematician, I was really good
at topology but not at algebra.
And at some point, that stopped me
from being even better at topology.
So if you take any one of those--
I think Howard wants to keep it simple,
but I wonder if he has a sub psychologist who has chopped up
mathematics into the right--
what are the right eight?
[INAUDIBLE]?
How many of you are bad at some kind
of mathematics and know why?
AUDIENCE: I'm really bad at Fourier series,
just because I don't like them.
[LAUGHTER]
MARVIN MINSKY: I wonder what Newton
would have thought about them.
In my PhD thesis, I had a--
it was mostly about neural networks.
And there were some people who thought
that you could put information-- if you had a bunch of neurons
in a circle, then you could put in a string of signals
of different durations and store the bits
in this circular thing.
Because in World War II, there were no digital computer
memories.
But there were some computer-like things
that stored signals in a tube of mercury
with a speaker and a microphone, and it
was possible to store a lot of information
in sort of analog bits for a long time.
But what you do is you have something
that would regenerate them and synchronize them with the clock
each time around.
And I was trying to prove a theorem
that, given what we know about the delay in neurons,
if you stimulate a neuron very strongly,
it reacts more quickly than if you stimulate it
just a little above threshold,
in which case it takes a longer time to fire.
So I was trying to prove that in neural networks,
in something like a human brain, you
couldn't store a lot of information in circular loops.
And I kept having trouble proving that.
And I ran into John Nash who was another student
a bit ahead of me.
And he listened to me for a minute
and he said, expanded in Fourier series.
And after about two days, I figured out
what he probably meant.
And I proved this nice theorem, and it turned out
the key condition had been discovered a long time
ago-- it was called a Lipschitz condition.
And if you have a certain condition like this,
then the information will go away.
But if you don't, you can keep the information around
for a very long time.
So in this case, the proof showed
that you couldn't store--
unless you had a renormalizer or a clock somewhere,
you couldn't store circular information
in a mammalian brain very well.
It's a nice example of something where
one person had a different way of looking at it.
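The theorem Minsky describes can be illustrated numerically. This is a hedged sketch with invented numbers, not his actual neural-network model: if each trip around the loop applies a contraction map (Lipschitz constant < 1), any two stored values converge and the bit evaporates; add a clocked regenerator that snaps the signal back to a clean level each cycle, and the bit survives even with per-cycle noise.

```python
import random

# One trip around the loop: a contraction with Lipschitz constant 0.9.
# (Toy stand-in for the neuron-delay dynamics Minsky analyzed.)
def loop(x):
    return 0.9 * x + 0.05        # fixed point at 0.5, slope 0.9 < 1

# Without regeneration: two distinct stored values merge.
x0, x1 = 0.2, 0.8                # encoding bit 0 and bit 1
for _ in range(100):
    x0, x1 = loop(x0), loop(x1)
print(abs(x1 - x0))              # ~1.6e-05: the bit has evaporated

# With a clocked regenerator: snap to a clean level each cycle.
def regenerate(x):
    return 0.2 if x < 0.5 else 0.8

random.seed(0)
y0, y1 = 0.2, 0.8
for _ in range(100):
    y0 = regenerate(loop(y0) + random.uniform(-0.05, 0.05))
    y1 = regenerate(loop(y1) + random.uniform(-0.05, 0.05))
print(y0, y1)                    # still 0.2 and 0.8: the bit persists
```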
Nash was pretty famous for his results in game theory,
but I suspect he might have been responsible
for 5 or 10 other things that he--
Norbert Wiener had this habit of talking to a student.
He says, what are you working on?
And the student would explain it.
And Wiener said, oh, well you just do this.
And I was present at a meeting of the--
I was in the math department where
they had a meeting about who would tell Wiener not
to do that anymore.
[LAUGHTER]
Some student had-- oh well, it's a true story.
I wonder what else I've forgotten.
Yes?
AUDIENCE: I'm curious.
You say this could be updated with a clock.
Is there any evidence to suggest that biologically one could
or could not construct a clock?
MARVIN MINSKY: There are lots of clocks.
I suspect that if I had thought about it more I would have--
because I'm talking the middle 1950s, and people
knew a lot about brainwaves.
And, you know, there are three or four
fairly large synchronous activities in the brain.
And I don't think anybody knows much about what they're for.
Do you know?
Have you heard any rumors?
What is the delta wave for?
AUDIENCE: Well, actually, the monkey experiment
I was just talking about relies on the assumption
that the beta wave is for suppression
and the alpha wave is for activation.
And I think people are still sort
of debating about the delta and theta waves.
MARVIN MINSKY: Mhm.
The alpha wave-- the 10-per-second one--
I think that's the big one.
And it goes away when you are thinking hard.
That is, if you're not focusing much on anything,
then it's a fairly nice regular 10 per second.
And if anything gets your attention and you focus on it,
then the alpha wave pretty much gets noisy and disappears.
I think.
I don't know what the others do.
Is that correlated with any event?
AUDIENCE: Obviously, the usual room shutting down.
MARVIN MINSKY: I brought all this,
but I decided not to use it anyway.
AUDIENCE: I think it's correlated
with a certain period of time after the signal
from the computer stops changing.
MARVIN MINSKY: Oh.
You mean it might wake up again?
AUDIENCE: No.
It shuts down at the same time every class.
AUDIENCE: It's not always the same time.
MARVIN MINSKY: It's usually at 8:30.
AUDIENCE: And he stopped using the [INAUDIBLE]..
Correlation implies causation.
MARVIN MINSKY: I wonder if Steve Jobs had--
this little thing has two batteries.
And at one end, there's a dot.
And the other end, there's a slot
which is for a screwdriver.
But it's also the minus sign of the battery.
It could have been plus, but--
but-- what's that?
AUDIENCE: It's probably wired so you can put a coin in.
MARVIN MINSKY: Any coin, actually.
AUDIENCE: Yeah, so you don't actually need a screwdriver.
MARVIN MINSKY: I don't have a coin.
[LAUGHTER]
AUDIENCE: But you do have a screwdriver, right?
MARVIN MINSKY: Of course.
It's usually one.
It's somewhere.
No tips.
[LAUGHTER]
Good question.
Yeah?
AUDIENCE: Do you think artificial intelligence
will ever be elected as a leader of a government?
MARVIN MINSKY: In most science fiction stories,
it doesn't give us a choice.
[LAUGHTER]
The Moon Is a Harsh Mistress.
That was Robert Heinlein, wasn't it?
It had a really smart computer emerge
from the internet on the moon.
Yeah?
AUDIENCE: Yes.
I was curious whether you had ideas as to how we attempt
to determine the representations of information
that either people or animals use to solve problems.
Clearly this is a critical problem with intelligence.
And lots of AIs got into various ways
of representing information.
But it would be really interesting to see
whether anyone has ideas of how that could be tested.
MARVIN MINSKY: That's wonderful.
What are the cognitive psychologists
doing about representations?
Have you run across any?
AUDIENCE: They studied reaction times, [INAUDIBLE]
way [INAUDIBLE].
They don't have very good ways of setting [INAUDIBLE]
MARVIN MINSKY: Yeah.
Rule-based systems are still the--
I haven't read a modern cognitive
psychology-- has anybody read a modern cognitive psychology
book?
Do they have trans-frames or scripts?
What's happening in that realm?
Try to remember what--
I guess I've never seen any Winston-like diagrams
in anything but AI.
But there must be some somewhere.
That was 1970.
Who has taken a psychology course?
Is that true?
What's in it?
AUDIENCE: They talk about babies a lot nowadays.
[LAUGHTER]
MARVIN MINSKY: Well, there's a little industry of trying
to show that Piaget was wrong.
Is that what they say about babies?
When do babies get conservation of quantity or something?
AUDIENCE: Yeah, basically just go
throughout the whole development stage and explain that.
But I have not seen Winston and [INAUDIBLE] predicts.
MARVIN MINSKY: Well, there is a problem with the low resolution
of brain scanning.
So if you can only tell when a square centimeter of brain
is more active than another part,
then it's hard to imagine how you
could look for the representation of an arch
as a block on top of two others.
But you should be able to make a hypothesis about representation
and then design an experiment in which you show
a picture of an arch and then quickly show a picture where
there's a little space between them,
so it's not being supported by--
and blink those on and off and see
if different kinds of changes in the representation
cause different kinds of brain activity.
But I suspect that most experiments
on watching brain activity are from giving a stimulus and not
a pair of quickly changing ones maybe.
So you want to find what parts of the brain
are activated when a certain kind of difference appears.
And it shouldn't be hard to make such experiments,
but my impression is that they don't do that so much as, you
show a certain face for a couple of seconds,
and then you show something else,
and you look to see if the activity moves somewhere.
But if your resolution is low, maybe you
should be putting in stimuli that change,
so that you're finding the response to the changes.
It's just a--
AUDIENCE: One of the problems is that there
is a delay with the kind of brainwaves you can get.
Like you can get more real-time reactions, like fMRI.
MARVIN MINSKY: Yeah, it usually takes several seconds
to get anything.
You have to do--
AUDIENCE: [INAUDIBLE]
MARVIN MINSKY: You'd have to repeat it many times,
and I think it still takes several seconds to get
any information, doesn't it?
What's the-- the first brainwave experiments were in the late--
in the 1940s.
By that Englishman Grey Walter, who also made that first robot
turtle and things like that.
I was just reading some of the--
some papers he wrote in the middle 1950s.
They're not very illuminating about AI,
but they show you what some people
were thinking in the days before computer science.
Yeah?
AUDIENCE: In your book you talk about [INAUDIBLE]
and big machines that accumulate huge libraries
of statistical data.
You say they cannot develop much cleverness
because they don't have higher reflective levels.
What are these higher reflective levels?
MARVIN MINSKY: Well, that's thinking about what
you were thinking a minute ago.
You know, you think something and then you
say, that was a bad idea, why did I get that?
Or now I realize I didn't understand something.
I've wasted five minutes because--
reflective thinking is just thinking
about your recent thoughts.
Maybe all thinking is--
any coherent train of thinking-- each thought
is something about the previous thought,
but it doesn't have the word "I" in it, you know?
You say, why did I waste so much time?
Why did I focus on this rather than that?
What did that person say?
Maybe I missed the point.
Maybe most of your thinking is, what did I just think?
Maybe I missed the point.
Yeah?
AUDIENCE: So here we often talk a lot
about cognitive science and psychology.
And I'm curious, how important do
you think [INAUDIBLE] science and psychology are
to the field of AI and whether the right way of trying
to build intelligent machines and understand intelligence
is through understanding what we've already seen.
Or it's playing around with computers
and trying to make systems that solve
the problems we want to solve.
MARVIN MINSKY: I'm glad you asked that, because I don't
think it's very important.
Because I think we all--
we've got to the point where we know that people solve
problems, and we all know how to think about how
we solve some problems.
We don't know the details of how we did it, but I think--
you know, if you look at what's been done in AI,
it's more than clear enough where the present system
stopped and where they fail.
And we keep thinking of ways to fix them,
and we get sidetracked.
Because that's-- you get some idea and it's too hard
to program, and somebody says, use C++ and somebody else says,
why did you go back to Lisp and--
And I guess my answer is, I don't
think we need, desperately, to know more about psychology.
Because we already have programs that are pretty good at things
and we can see where they get stuck.
But it would be nice if there were a community
out there helping us.
Because the AI groups are all alone,
and they don't communicate very well with each other,
and they're not very well supported.
But I bet as long as we make machines smarter,
the psychologists will pay more attention
and they'll come back and tell us better things.
And eventually, there'll be a real cognitive science.
Sort of like physics.
Physics did very well with Newton and Galileo and quantum
mechanics.
But now they have a great community.
And when some serious problem comes up, somebody--
spend a billion dollars for a new accelerator or something.
There's nothing like that in AI.
If you say, why did the Newell-Simon general problem
solver get stuck on missionaries and cannibals?
Somebody used to say, well, here's a billion dollars.
I know it's not enough, but maybe you
can make it a little smarter.
Nobody's offering this.
AUDIENCE: [INAUDIBLE].
Somewhat related question.
So first, since AI is mostly an engineering discipline,
it's a question of, how can we make
machines to solve these problems with intelligence?
Do you think this is going to lead to a better
understanding of intelligence?
And how important do you think that is to this more I guess,
mostly scientific but also slightly philosophical
question?
MARVIN MINSKY: I think it's just an engineering question.
There just isn't a way to get enough bright people
to compete with each other to make better AI systems.
It's-- anybody have a theory?
You see, I'm speaking from the point of view,
feeling that there hasn't been much progress in recent years.
And maybe I'm wrong and there's a lot of great stuff
just ready to be exploited.
But I don't see it.
AUDIENCE: I think we're kind of in a spinning [INAUDIBLE]
of sorts where people are doing a lot of the work in terms of,
for instance, tuning the parameters
and choosing machinery approximations in order
to solve problems that there are incentives out there to solve.
And in principle, if we had AI that was good,
AI that would do that work instead of programmers
having to tune parameters and figure out which algorithms
are good for different problems.
But as of now, the way the incentives are structured,
it's going to take a big energy push to sort of get over
the hump of actually creating the infrastructure that's
necessary for that stuff to happen automatically.
MARVIN MINSKY: Yeah, there are AI groups.
There are a few people at Georgia Tech and Carnegie
Mellon.
Although, my impression is that they're
mostly playing robot soccer or something.
So a lot of the people who are empowered to do the right thing
are--
or you look at Stanford.
It's wonderful to make these self-driving cars.
But I don't think a single thing has been learned from that.
Maybe a little has been learned from the Watson thing, but--
AUDIENCE: They won't give out their source code.
MARVIN MINSKY: Right.
And if they did, I think they could
read The Society of Mind that says,
have a lot of different methods and find some way
to integrate them.
What's missing in The Society of Mind
is better ideas on how to integrate them.
And Watson might have some.
But on the other hand, it might not.
Maybe if it can end up with an answer that's one word,
like a person or a sport, then it's done.
And so it may be that we know it's at the lower levels.
And we don't know what's at the higher levels,
and maybe it's no good.
On the other hand, maybe there are 10 very important ideas
there, and you'd have to read that long paper
and try to guess what they were.
Do we have a spy in there?
Are they telling us something?
AUDIENCE: I get little bits and pieces back at the end.
I think it is kind of--
you know, the good news about that is it
has made some progress, and it is kind of a society of models.
And they have some supervisory processes
to try to figure out which--
actually, the most important thing
is to try to figure out which methods are good for which
kind of questions.
MARVIN MINSKY: That would be good.
So they might have some good critics
and selectors-like things.
AUDIENCE: Yeah.
So there's some of that, I think, in there.
I don't think there are a lot of very brand new techniques,
but I think there's probably some of that, yeah.
MARVIN MINSKY: They fired their other AI group,
but I don't think it was getting very far either.
You know the one I mean, the Eric Mueller and--
no he moved.
AUDIENCE: He worked on Watson.
MARVIN MINSKY: No, I mean Doug Riecken, Riecken's group.
It was doing more mathematical AI than, I think, heuristic AI.
Any other company doing anything?
What are the common sense groups in Korea and places like that?
AUDIENCE: Well, I'll point it out in December
when I go there.
MARVIN MINSKY: Henry's going to visit some of them?
The mysterious East.
Yes?
AUDIENCE: So since a long time ago,
from [INAUDIBLE], there have been machines
that try to build a reflective [INAUDIBLE].
There are critics.
And even though the idea died out in the '80s,
there are still some machines, like maybe Watson,
that have critics.
But the reflective layer, I feel like it
does a lot of different things.
So what do you think is missing from that layer
that no project has [INAUDIBLE]?
MARVIN MINSKY: I'm not sure what you're asking.
But there is Pat Winston's group working on stories,
and my impression is that that's making definite progress.
And if he can integrate with Henry Lieberman's kind
of large, commonsense knowledge base,
maybe something great will happen.
But progress is a little bit slow.
Gerry Sussman is still full of ideas,
but he keeps teaching courses in physics.
[LAUGHTER]
And he's out there fixing telescopes,
and he's absolutely a prodigy.
And now he's working on this theory of propagators,
which he claims is relevant to AI,
and I don't understand it yet.
But--
AUDIENCE: It's good.
MARVIN MINSKY: What?
AUDIENCE: [INAUDIBLE].
MARVIN MINSKY: I'd like to see it
solve some interesting problem.
But-- so we have a lot of resources here.
But if you look at the world as a whole--
AUDIENCE: Yeah, for example, you talked
about how we should combine [INAUDIBLE] group
with the [INAUDIBLE] knowledge base.
So I feel like doing that, we need
some newly invented machinery.
MARVIN MINSKY: Yeah, to what extent is--
AUDIENCE: Well, if you would like to work on it,
please come see me afterwards.
MARVIN MINSKY: It's a very lively group.
What's happening to Lenat's group?
Is he just hiding or is he--
AUDIENCE: No, I think Doug Lenat is on a side project.
And it's been steadily growing.
And I think one thing--
so what was-- he had a very interesting article
recently about using it for common sense
for medical queries.
So the Watson guys said that, you know,
they want to apply Watson to medicine.
But I think Lenat had a really good article about applying it
to medical queries.
It was things like--
you know, so the doctors would ask things like,
which operations, for some disease or something,
have complications?
And the system would have to know,
what's a complication, right?
A complication is when things don't go right.
So a drug reaction could be a complication.
Leaving a scalpel in a patient could be a complication.
So you have to understand some of the ideas of--
you know, common sense ideas of what might be a complication
and what might cause trouble and those kind of things.
And I thought that was a very nice system at the Cleveland
Clinic.
And the doctors loved it, and they wrote about it
in [INAUDIBLE] magazine.
I thought that was a real success.
MARVIN MINSKY: Oh, I haven't seen that.
Dr. Lenat.
AUDIENCE: I mean, the problem is that the reason that you
haven't heard a lot of applications for so long
is because they were funded, you know,
for decades by three letter agencies in the government.
And they did--
I think they did actually quite good work for them.
Because otherwise, the program wouldn't
have continued for 25 years.
But the problem is, you know, when they do something good
for the secret agencies, nobody else finds out
about it [INAUDIBLE].
MARVIN MINSKY: I have a great story about that,
which is almost unbelievable.
Which is, I was at a meeting with John Glenn at-- this
was a long time ago, when it was just starting.
And this was in a building a block from the White House,
and it had all these people from some agency
about whether AI could help them with their problems.
And somebody pulled out some slides and was
about to give a lecture.
But the shelf that had the projector on it had hinges,
and all the screws were missing on one side
and it fell down like this.
And they fussed for a long time and couldn't
get the projector to line up.
And then I had this thing.
And I took three screws out of--
it had three hinges and I took three screws out of here
and put them in here and here and here.
And then the shelf stayed up and the show went on.
You know, it's like the joke about the--
anyway.
So they were astounded because I actually
fixed this stupid thing.
And I said, well, why didn't you?
And they said, we asked maintenance three weeks ago
and they never got around to it.
And I said, this is the agency?
And they said yes.
And then they said, but why did you have that thing with you?
[LAUGHTER]
AUDIENCE: That's OK.
You never get past the metal detector.
MARVIN MINSKY: When I was a kid, I heard some story--
oh, never mind.
About when a car wheel rolls off,
you take one screw from each of the other three.
So I was doing exactly that, and these agency people had never
thought of doing it themselves.
So what does it mean when you have a government run by people
who can't fix this hinge?
I once met a freshman who didn't know which way to turn a screw.
At MIT.
How many of you have to try both--
[INTERPOSING VOICES]
AUDIENCE: [INAUDIBLE] screw.
MARVIN MINSKY: The left hand.
Some rule, right.
AUDIENCE: You're not doing the [INAUDIBLE] anymore.
AUDIENCE: Well, if you're screwing into weird angles,
like [INAUDIBLE] pieces and stuff.
MARVIN MINSKY: Yeah, sometimes.
AUDIENCE: [INAUDIBLE].
MARVIN MINSKY: That's right.
Enough stories.
AUDIENCE: Are you sure?
[LAUGHTER]
MARVIN MINSKY: So has he--
oh, can you send us a pointer to that paper?
Lenat's?
AUDIENCE: Oh yeah, sure.
MARVIN MINSKY: That would be nice.
He's one of the great pioneers of AI.
AUDIENCE: I guess I have a question
about like extracting a piece of knowledge from experience.
I feel like this is something that I think we reflectively
[INAUDIBLE] all layers.
But maybe-- it's probably also this-- probably the
reflective layer.
So how do you think it does that?
MARVIN MINSKY: How do you retrieve your knowledge?
AUDIENCE: How do you turn an experience into a piece of--
a rule?
MARVIN MINSKY: How do you learn from an experience?
AUDIENCE: Yeah.
MARVIN MINSKY: You do something and then you get some knowledge
and where do you put it?
AUDIENCE: How do [INAUDIBLE]?
How general [? do you need ?] it?
Or do you just try it?
Do you have to group a lot of experiences
and then results [INAUDIBLE]
MARVIN MINSKY: If we could answer that,
we could all quit and go home.
You're asking the central problem of,
how do you learn from experience and later retrieve
how you learned?
I just can't think of any way to answer
that except write a whole book and then have everybody
find out what's wrong with it.
Yeah, science is making the best mistakes.
If you make a really good mistake
then somebody will fix it and you'll get progress.
If you make a silly mistake, then nothing is gained.
AUDIENCE: I think we should have the surgeon [INAUDIBLE]
make better mistakes.
MARVIN MINSKY: Well, how do you decide what to try?
Yeah.
That's my complaint about the probabilistic methods.
Because if there are a lot of--
well, I talked about it the other day.
If there are a lot of different aspects of the situation,
like 100, then there's 2 to the 100th conditional probabilities
to think about.
And so probabilistic learning machines
work wonderfully well on small problems
where the search trees aren't too big.
But they don't-- but the hard problem is what to do when
there is a lot of different factors and you don't know
which are important.
And in lots of situations, you start with just first-order
correlations: there are 100 factors, and you just
look at the probabilities of each of them.
And then there's 10,000--
5,000 pairs of things.
And you look at the 5,000 joint conditional probabilities
of two things, and maybe five of them pop up,
and you've only got five things to look at.
And that's where that kind of AI system works.
And it's become immensely popular.
And the trouble is, it'll never get smarter.
Because if you have to look five steps ahead,
then instead of 10 possibilities,
you have 100,000.
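The arithmetic behind this complaint is easy to check: with 100 binary factors there are 2^100 joint settings, but only 100 first-order and about 5,000 second-order terms, which is why shallow statistical methods work; and a branching factor of 10 gives 100,000 length-5 paths instead of 10.

```python
from math import comb

factors = 100
print(comb(factors, 1))        # 100 first-order probabilities
print(comb(factors, 2))        # 4950 pairwise terms -- the "5,000 pairs of things"
print(2 ** factors)            # 2^100 full joint settings: hopeless to tabulate

# Lookahead blowup: branching factor 10, five steps ahead.
print(10 ** 5)                 # 100000 possibilities instead of 10
```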
And anyway, my concern is that there
are quite a lot of millions of dollars going into AI research.
But most of it is going into dead ends.
So it's not as though there were--
maybe there is enough money, but it's
going to the easy problems instead of the hard ones.
Who has an easy question?
AUDIENCE: So lots of our people like
to play games and procrastinate.
Do you think artificial intelligence will also
play games and procrastinate?
MARVIN MINSKY: Well, there's the opposite question.
I got a message from somebody I don't remember who--
I had complained that nobody's been able to get Push Singh's AI
program to work.
And somebody suggested-- what?
Yeah.
And somebody suggested that--
I forget the name.
There's some group of people who like problems.
And I can't remember what--
it's just a bunch of people out on the web
who like to solve programming problems.
And this person suggested sending the code
to that group of a couple of thousand people
and maybe they would self organize
to try to figure out how it works and fix it.
So do you think that would--
could that work?
AUDIENCE: Yeah.
MARVIN MINSKY: We have a big bunch of code.
It's partly commented.
Could we get 1,000 really aimless hackers out there
with lots of ability to--
so maybe I'll try it.
Sort of--
AUDIENCE: [INAUDIBLE] is that they
might have their own code or their own sections.
MARVIN MINSKY: Well, if they're self organizing enough.
I mean, if an individual tries to fix it, that's fine.
But maybe these people know how to work together.
So they could chop it up and talk to each other and agree.
It doesn't have to be the same as Push's.
It just has--
I don't know if you've seen the movie.
I'll bring it next time.
You had a robot coming to try to screw the legs onto a table,
but the robot has only one hand.
So there's another robot over there.
The first one says, help.
And the other one figures out just enough
to come over and pick up the other end of the table.
So as far as I know, this thesis only worked out one example.
AUDIENCE: Yeah.
Actually it was-- the tricky part
in that was that when the other robot said [INAUDIBLE]
the other robots looks away.
Well, you know this [INAUDIBLE] So the other one [INAUDIBLE]
the other robot [INAUDIBLE] The first robot is
trying to take the table apart.
MARVIN MINSKY: Yes.
AUDIENCE: So then you have the second robot doing that.
And then you have to correct it, no.
Then you go back and show it [INAUDIBLE] fix it.
MARVIN MINSKY: Right.
And the first robot just says no.
Which the other robot has to be very
stupid to be able to interpret that as exactly one thing
not to do.
If it were smarter, it probably wouldn't work.
But anyway, I'll bring the movie in.
Have you looked at the code?
AUDIENCE: [INAUDIBLE] debug it myself.
MARVIN MINSKY: Yeah.
It looks pretty horrid.
[INTERPOSING VOICES]
MARVIN MINSKY: My favorite story, which I think is true,
is that Slagle's program for doing integration for--
yeah, was about five pages of Lisp.
And Joel Moses said that it took him several weeks
to figure out--
because Slagle was blind and had to program in Braille.
So Joel said that he made the most intricate convoluted
expressions so that he wouldn't have to type so much.
And then Joel-- so that was the first.
I had written a program that differentiated
algebraic expressions.
And that was a great breakthrough
although it was completely trivial.
Namely, I just put in the letter D.
And if it saw an expression x times y,
it would say x times dy plus y times dx.
And there are only four or five such rules.
Then just sweep through until it had
this big long expression, and that
turned out to be the derivative.
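The differentiator Minsky describes can be sketched in a few recursive rules; the tuple representation and names below are my assumptions, not his original program:

```python
def d(expr, var='x'):
    """Differentiate a nested-tuple expression with respect to var."""
    if expr == var:
        return 1                      # d(x) = 1
    if not isinstance(expr, tuple):
        return 0                      # constants and other symbols: d(c) = 0
    op, a, b = expr
    if op == '+':
        return ('+', d(a, var), d(b, var))              # sum rule
    if op == '*':
        # product rule: d(a*b) = a*db + b*da
        return ('+', ('*', a, d(b, var)), ('*', b, d(a, var)))
    raise ValueError(f"unknown operator: {op}")

# d(x*x)/dx sweeps the rules through and returns an unsimplified 2x:
print(d(('*', 'x', 'x')))  # → ('+', ('*', 'x', 1), ('*', 'x', 1))
```

The output is correct but long and unsimplified, which is the problem Moses's simplifier was written to fix.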
But then Joel wrote--
the trouble is it was too long.
And then Moses wrote something to simplify it.
And then Slagle wrote a simple integration program.
And then Moses wrote a really complicated one.
And eventually, a couple of other mathematicians
studied that and extended it and worked out a theory of--
for the final integration program,
that could integrate any expression
that had an integral in closed algebraic form.
Which means a function of exponentials, sines and cosines,
and polynomials.
And the result of that was a sort
of nice story, which is that the American Mathematical Society
had a big suite of rooms in Providence, their headquarters,
where they had collected all the integrals that
were known for hundreds of years,
ever since Newton did the first ones.
And there were rooms full--
So every time somebody found a new integral,
they would write it up and send it
to the American Math Society.
And it would get cataloged there.
And they had raised funds for--
it was called the Bateman Manuscript Project.
And there was a fund for organizing all this data.
And the minute the program came out,
the Bateman Manuscript Project was terminated and closed.
[LAUGHTER]
Because-- and I think Macsyma had the solution in it.
And Mathematica is the sort of big successor to that.
But it was a nice piece of history spread
over about five or six years.
And I don't know that anybody works on that anymore.
We had a couple of PhD theses starting
of trying to solve differential equations,
and they didn't get very far.
That's probably an important--
it looks like Lemelson is over.
[INTERPOSING VOICES]
AUDIENCE: It's the 100K.
AUDIENCE: Elevator Pitch.
MARVIN MINSKY: You think they actually awarded one?
AUDIENCE: Yeah!
AUDIENCE: Probably.
[INTERPOSING VOICES]
AUDIENCE: Well, they have 100K.
I think the Elevator Pitch contest is a separate thing.
AUDIENCE: Yeah, they-- oh.
There's three parts of it.
That's the first one.
MARVIN MINSKY: Well, any more--
oh, way back.
AUDIENCE: Do you think there's ever
going to be a way to crowd source the AI research at all?
MARVIN MINSKY: That's what I meant.
That's the expression I was looking
for, for fixing the Push thesis.
But it would be nice.
AUDIENCE: It wouldn't be a self-organizing thing.
Like someone would have to--
I mean, is that--
I feel like that would not be-- it would be hard for people
to self-organize to do that.
But there were already [INAUDIBLE] structure.
And that minimal piece that everyone
could do for AI research that's already defined.
Do you think AI research is structured in a way that
could never be broken down?
MARVIN MINSKY: Well, don't these crowd things usually
start with some--
they must start with some sort of leader
but then they become self organizing or--
AUDIENCE: I mean, they [? weren't. ?]
Because every participant had a specific-- has
a specific and distinct-- had basically the same small
[INAUDIBLE] And they don't become more complex than that.
I mean because of the community or whatever.
But it doesn't-- the idea of lowering the floor of doing AI
research so that more people can contribute.
MARVIN MINSKY: It's a nice question.
Well, let's think about it.
If we take this PhD thesis code, it
wouldn't be much good to send the whole thing to everyone.
Well, of course it wouldn't cost anything to do that.
But you need somebody to make a first pass at chopping it
up into, say, look at this function didn't work.
Maybe there's some code missing.
So you might need a person or a couple of people
to sort of organize the project.
But once you've got a community, they
might be able to cooperate.
AUDIENCE: [INAUDIBLE] already [INAUDIBLE]
to be able to contribute.
MARVIN MINSKY: They might do without a leader
once the problems became clear enough.
AUDIENCE: [INAUDIBLE].
I would say that, I mean, there are some crowdsourced AI
projects.
Certainly, if you go to Source [INAUDIBLE]
or the Restaurant Game-- that's crowdsourced.
[INAUDIBLE] crowdsourcing [INAUDIBLE].
But I don't think crowdsourcing
is really great for problems that demand creativity.
[INAUDIBLE] commands that are--
projects that are labor intensive.
It's good for [? SETI ?] at home, that kind of thing.
But for projects that demand a lot of creativity,
it kind of breaks my heart almost.
Because if you look at like the open source Unix,
you know they've done a great job at organizing people
to work on Unix and three versions of Linux
and three versions of Unix.
But on the other hand, the software isn't very innovative.
You know, they just implement 60 versions of the [INAUDIBLE].
And the Unix interface hasn't changed since the 1960s,
pretty much.
You know, everybody's still programming
in terminal windows.
So I think it's--
crowdsourcing is mainly good for projects
that are labor intensive.
I think AI, you would see individual creativity more.
It's like McCarthy said.
We'd see 2.3 Einsteins and 1.7 Manhattan Projects.
That, you know, [INAUDIBLE] it's probably good
for the Manhattan Projects but not for the Einsteins.
MARVIN MINSKY: On the other hand, given we have the movie,
it might be that the problem of getting this code to make
that movie isn't so creative.
You could see-- you can start it up and see where it gets stuck,
and--
it's worth a--
AUDIENCE: [INAUDIBLE] if you have something
along the lines of Watson that incorporates lots
and lots of small programs.
And you can have people contribute small programs
whether they're good or bad.
[INAUDIBLE] figure it out and then [INAUDIBLE]..
AUDIENCE: Well, in some sense, Watson was crowdsourced.
Because it wasn't only developed by that IBM group.
They had Watson collaborators in ISI and CMU and other places.
And they crowdsourced it by getting little research grants
to integrate their part of the research into Watson.
So I think you could argue that actually was crowdsourced.
MARVIN MINSKY: Another thing we haven't tried
is called throwing money at it.
If we-- suppose we got $500,000 or $1,000
and told some programmer, can you get this to work?
AUDIENCE: Well, if you have good enough inspirations
of various small programs that [INAUDIBLE] something,
then I think that part of the problem
is creativity [INAUDIBLE].
Because some parts-- or some [? soft ?] programs could be
more [? stupid. ?] And as long as you have a description
of what that program is doing, then you could have some really
creative program, that's--
that might use those [? stupid ?] programs.
[INTERPOSING VOICES]
AUDIENCE: But you still need that one very creative person
to [INAUDIBLE]
AUDIENCE: But you must be so--
[INAUDIBLE]
AUDIENCE: Yeah.
I'd [INAUDIBLE] World of Warcraft 10
and leak it on the internet.
[LAUGHTER]
MARVIN MINSKY: My grandson was suspended
from Warcraft for three weeks because he hacked the thing
to get a higher priority on something.
[LAUGHTER]
I think he was not--
do you know how old he was?
AUDIENCE: [INAUDIBLE]?
MARVIN MINSKY: Yeah.
No, Miles.
AUDIENCE: No.
I don't think he's played one of those games in a long time.
MARVIN MINSKY: No, I think he was about 10 or 12.
And he had actually managed to get into the thing
and get instant service.
He was very proud of being banned.
[LAUGHTER]
I give up.
Any last request or idea?
Thanks for coming.