4. Question and Answer Session 1
Summary
TLDR: In this session, Professor Marvin Minsky holds a wide-ranging discussion with the audience spanning neuroscience, artificial intelligence, cognitive psychology, and language learning. Minsky shares his views on brain function, neural connectivity, and the possible role of glial cells. He discusses how the brain handles different mental states such as denial, bargaining, frustration, and depression, and explores the differences in cognitive ability between animals (such as dogs) and humans. He also offers his perspective on how to evaluate machine intelligence, including whether machines need genuine understanding or merely need to simulate human behavior. The discussion is full of insight into the human mind and machine learning, showing Minsky's deep grasp of these complex topics.
Takeaways
- 📚 Supporting MIT OpenCourseWare helps it continue to offer high-quality educational resources for free.
- 🧠 Marvin Minsky discusses how the brain works, particularly the possible role of glial cells.
- 🤔 Minsky voices a concern about brain research: the neuroscience community may focus too much on the chemicals between neurons while neglecting the importance of network architecture.
- 🐙 He raises the hypothesis that some animals (such as octopuses) may have complex planning abilities, possibly tied to their capacity to remember long sequences and semantic content.
- 🐕 Minsky is curious about dogs' cognitive abilities, especially whether they can plan ahead and compare alternative situations.
- 🤖 He explores the development of artificial intelligence, emphasizing the role of computer science in understanding human cognitive abilities.
- 🧐 Minsky notes that even a small difference in the number of registers in a computer can make an exponential difference in how fast certain problems are solved.
- 🧬 He compares human and animal intelligence, including his views on dogs' cognition and whether they have human-like reasoning abilities.
- 📈 Minsky discusses how to evaluate intelligent machines, especially once they reach or exceed human-level intelligence.
- 🌐 He also touches on using different models to solve problems, and how people choose a way of thinking when facing a problem.
- ❓ Finally, Minsky raises the question of how to recognize and select the best data and methods for solving a problem, which remains an unsolved core problem in AI and cognitive psychology.
Q & A
How can one donate to MIT OpenCourseWare or view additional materials?
-To make a donation or view additional materials, visit ocw.mit.edu.
What is Marvin Minsky's view on how the brain works?
-Minsky declines to speculate on how the brain works, because the neuroscience literature is full of papers proposing different hypotheses, including the idea that glial cells, rather than neurons, may be doing the work.
How does the number of connections affect what neurons can process?
-A typical neuron has about 100,000 connections, which suggests that very complex processing must go on inside the neuron, possibly involving the other cells that support neurons.
How did the history of neuroscience develop?
-Early neuroscience assumed all neurons were connected; only around 1890 did the clear idea emerge that nerve cells are not arranged in a continuous network but are separated by small gaps called synapses.
What is Minsky's view on chemicals in the brain?
-Minsky holds that despite the folklore about the roles of chemicals in brain activation, modern knowledge of the nervous system shows that connections along a neural pathway tend to alternate, not always but frequently, between inhibitory and excitatory.
What happens in the brain during an epileptic seizure?
-During a seizure, enough electrical and chemical activity (mostly electrical) causes parts of the brain to start firing in synchrony, and the activity spreads like a forest fire.
What is Minsky's view on consciousness?
-Minsky finds the word "consciousness" too vague, covering many different meanings. Consciousness may involve many different systems, and different animals may have different subsets of them.
How should we understand the brain's software and hardware?
-Minsky views the brain's hardware as the product of evolution and its software as what we learn, including ways for one part of the brain to regulate another. He suggests the brain may have multiple levels of representation, analogous to registers in a computer.
What is Minsky's view on artificial intelligence?
-Minsky argues that AI should consider the different ways humans solve problems rather than merely imitating how the human brain works. He also notes the importance of computer science for understanding intelligence.
What is Minsky's view on animal intelligence?
-Minsky thinks animal intelligence may be fundamentally different from human intelligence, and that animals may even be better than humans in some respects, such as remembering long sequences.
What is Minsky's prediction for the future of neuroscience?
-Minsky predicts that neuroscience will be very exciting in the next generation, thanks to the falling cost and rising resolution of brain scanning techniques and new chemical tracing methods.
What does Minsky expect for psychology and cognitive science?
-Minsky expects more mathematical psychology to emerge, with cognitive psychology adopting more computational complexity theory to understand the processes that problem-solving requires.
Outlines
📚 Supporting MIT OpenCourseWare, and a discussion of neuroscience
This section covers MIT OpenCourseWare's funding message, Professor Minsky's interaction with the audience, and a discussion of neuroscience and how the brain works. Minsky modestly declines to speculate on brain mechanisms, notes the complexity of neural connectivity, and considers the possible role of glial cells. He also reviews the history of neuroscience, from early views of the neural network to modern research on neurotransmitters and synapses.
🧠 Brain structure, memory, and problem solving
Minsky discusses the structure of the cerebral cortex, particularly in mammals, and conjectures that these structures underlie thinking and planning. By comparing the brains of different animals, including humans and dogs, he explores levels of cognition and intelligence. The discussion also covers the emotional stages of responding to problems, such as denial, bargaining, frustration, and depression.
🌙 Circadian rhythms and switching brain activity
This section explores how animals activate or shut down parts of the brain for different activities, for example how diurnal and nocturnal rhythms shape behavior. It also covers the influence of internal clocks and external light on animal behavior, the variety of human sleep cycles, and possible sleep disorders.
🤔 Criticism of the Society of Mind theory and the future of neuroscience
Minsky discusses criticisms faced by the Society of Mind theory and possible causes of Alzheimer's disease. He addresses the brain's software and hardware aspects, and the software bugs that evolution may have introduced. He also predicts developments in neuroscience, including new brain scanning techniques and the use of synthetic chemicals.
🧐 Consciousness, intelligence, and animal cognition
Minsky and the audience explore consciousness, intelligence, and whether animals have human-like cognitive abilities. They discuss the potential of computers to simulate human intelligence, as well as the brain's neural networks and cognitive architecture. They also consider the differences between dog and human intelligence, and how to infer animals' cognitive abilities from their behavior.
🧬 Evolution, genetic change, and brain development
This section discusses how genetic changes affect brain development and what changes are possible in evolution. Minsky notes that mutations can operate mainly at the earliest and the most recent stages of embryonic development, and explains how such changes affect the offspring's brain structure and function. He also discusses brain evolution, especially the marked expansion of the human frontal cortex.
🐕 Comparing dog cognition with human intelligence
Minsky and the audience discuss dogs' cognitive abilities, including whether they can plan and predict the future. They compare dog and human intelligence and ask whether the differences are merely a matter of computational power. They also consider the intelligence and language abilities of different animals relative to humans.
🚇 Moscow's dogs and intelligent behavior
This section recounts how Moscow's stray dogs learned to ride the subway and adopt different strategies for begging food downtown. Minsky shares a story about his own dog riding public transport and discusses animals' capacity to learn and adapt to their environment.
🤖 Directions for artificial intelligence
Minsky and the audience discuss the future of AI, including whether it should imitate human thinking and how to assess the intelligence of machines. They compare machine learning and traditional algorithms on particular problems, and discuss tests for evaluating machine intelligence.
🧵 Knowledge representation and problem solving
This section discusses levels of knowledge representation, including semantic networks and intermediate-level representations. Minsky and the audience consider what kinds of representations animals might have, how those representations enable certain behaviors, and how to infer an animal's representational abilities from its behavior.
📈 Psychological research and artificial intelligence
Minsky and the audience discuss the state of psychological research, particularly work on teamwork and problem solving. They consider Piaget's developmental psychology and its influence on the understanding of AI, as well as the diversity of AI research and shifts in psychological research.
👶 Intelligence development in infancy
This section discusses the importance of infancy for the development of intelligence, and the idea of building AI by simulating infancy. Minsky raises questions about how machines should learn, including whether machines need human-like instincts and how education and experience could develop a machine's knowledge representations.
🗣️ Language learning and infant development
Minsky and the audience discuss the role of language learning in infant development and why infants begin to speak at a particular stage. They explore an engineering view of language, how one might bootstrap a machine through programming to build up experiential knowledge, and the similarities and differences between human infants and machine learning.
Keywords
💡Neuroscience
💡Artificial intelligence
💡Cognitive psychology
💡Cerebral cortex
💡Neuroplasticity
💡Synapse
💡Mental models
💡Semantic networks
💡Turing test
💡Evolutionary psychology
💡Cognitive development
Highlights
MIT OpenCourseWare offers high-quality educational resources for free; supporters' donations keep it running.
Marvin Minsky discusses the hypothesis that glial cells may be involved in thinking.
Minsky stresses neurons' complexity: each may have as many as 100,000 connections.
He suggests brain function may not rest on neurons alone; other supporting cells may participate too.
Minsky reviews the history of neuroscience, including the discovery that neurons do not form a continuous network.
He discusses the roles of brain chemicals such as hormones and epinephrine.
Minsky describes how connections between brain centers tend to alternate, some inhibitory and some excitatory.
He uses epileptic seizures as an analogy for runaway brain activity.
Minsky is inclined to look in the cerebral cortex for the architecture of memory and problem-solving systems.
He notes that the behavior of animals without a cortex can be explained by low-level stimulus-response reflexes.
Minsky explores how humans handle failure and frustration, and the stages of emotional response.
He discusses the differences between animal and human brains, especially the size of the frontal cortex.
Minsky suggests that multiple systems may work in parallel in the brain to handle important functions.
He notes that different mental illnesses may arise from failures in different brain systems.
Minsky predicts that new scanning techniques and synthetic chemicals will make neuroscience very exciting in the coming years.
He criticizes current AI theories as unconfirmed accounts of how the mammalian brain actually works.
Minsky discusses the role of software in intelligence, suggesting the brain's learning and function may depend more on "software" than "hardware".
He suggests that dogs may have a form of consciousness different from humans'.
Minsky explores how computer science changes our understanding of abilities and intelligence.
He raises the question of how to judge whether an AI has reached human-level intelligence.
Transcripts
The following content is provided under a Creative
Commons license.
Your support will help MIT OpenCourseWare
continue to offer high quality educational resources for free.
To make a donation or to view additional materials
from hundreds of MIT courses, visit MIT OpenCourseWare
at ocw.mit.edu.
MARVIN MINSKY: I presume everyone has an urgent question
to ask.
Maybe I'll have to point to someone.
AUDIENCE: One over there.
MARVIN MINSKY: Oh, good.
AUDIENCE: So [INAUDIBLE] exactly what's said,
but you said that maybe the [INAUDIBLE] lights are
associated to the glial cells.
Is that right?
MARVIN MINSKY: Oh, I don't want to speculate on how
the brain works, because--
[LAUGHTER] because there's this huge community
of neuroscientists who write papers about--
they're very strange papers because they talk about how
maybe it's not the neuron.
And I've just downloaded a long paper
by someone whose name I won't mention about the idea
that a typical neuron has 100,000 connections.
And so something awesomely important
must go on inside the neuron's body.
And it's got all these little fibers and things.
And presumably, if it's dealing with 100,000 signals
or something, then it must be very complicated.
So maybe the neuron isn't smart enough to do that.
So maybe the other cells nearby that support the neurons
and feed them and send chemicals to and fro around there
have something to do with it.
How many of you have read such articles?
It's a very strange community, because--
I think the problem is that the history of that science
started with the general belief that all
the neurons were connected.
And then around 1890 was the first clear idea
that nerve cells weren't arranged
in a continuous network.
I think it was generally believed that they were all
connected to each other, because as far as you
could tell, the microscopes of the time
didn't show enough.
And then the hypothesis that the neurons are separate
and there are little gaps, called synapses,
as far as I can tell started around the 1890s.
And from then on, as far as I can see,
neurology and psychology became more and more separate.
And the neurologists got obsessed with chemicals,
hormones, epinephrine, and there are about a dozen chemicals
involved that you can detect when parts of the brain
are activated.
And so a whole bunch of folklore grew up
about the roles of these chemicals.
And one thought of some chemicals
as inhibitory and excitatory.
And that idea still spreads, although what we know about
the nervous system now-- and I think I mentioned this before--
is that in general if you trace a neural pathway
from one part of the brain to another, what happens
is that the connections tend to alternate, not always,
but frequently.
So that this connection might inhibit this neuron.
And then you look at the output of that neuron,
and that might tend to excite neurons
in the next brain center.
And then most of those cells would tend to inhibit.
I mean, each brain center gets inputs from several others.
And so it's not that a brain center
is excitatory or inhibitory, but the connections
from one brain center to another tend to have this effect.
And that's probably necessary from a systems
dynamic point of view, because if all neurons tended
to either do nothing or excite the next brain center,
then what would happen?
Soon as you got a certain level of excitement,
then more and more brain centers would get activated.
And the whole thing would explode.
And that's more or less what happens
in an epileptic seizure, where if you
get enough electrical and chemical activity of one kind
or another, mostly electrical--
I think, but I don't know--
then whole large parts of the brain
start to fire in synchrony.
And the thing spreads very much like a forest fire.
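Minsky's systems-dynamics point, that purely excitatory chains blow up while alternating excitation and inhibition keeps activity bounded, can be caricatured in a few lines. This is purely a toy sketch, not a biophysical model; the ring of "centers", the gain, and the all-or-none clipping are all invented for illustration:

```python
# Toy illustration: propagate activity around a ring of "brain
# centers" whose links are either all excitatory (+1) or alternate
# between excitatory (+1) and inhibitory (-1).

def propagate(signs, steps=20, gain=1.5):
    """Return total activity over time for a ring of centers."""
    n = len(signs)
    activity = [1.0] + [0.0] * (n - 1)   # a single initial spark
    history = []
    for _ in range(steps):
        nxt = [0.0] * n
        for i in range(n):
            # center i drives center i+1 with sign signs[i]
            nxt[(i + 1) % n] = max(0.0, signs[i] * gain * activity[i])
        activity = nxt
        history.append(sum(activity))
    return history

all_excitatory = propagate([+1] * 6)      # every link excites: runaway growth
alternating    = propagate([+1, -1] * 3)  # alternating links: activity quenched
```

With every link excitatory, the spark is amplified on each hop and total activity grows without bound, the "forest fire"; with alternating signs, the first inhibitory link quenches it.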
So that's a long rant.
I guess I've repeated it several times.
But it's hard to communicate with that community,
because they really want to find the secret of thinking
and knowledge in the brain cells,
rather than in the architecture of the interconnections.
So my inclination is to find an intermediate level, such as,
at least in the cortex, which is what distinguishes the--
does it start in mammals?
AUDIENCE: I think so.
MARVIN MINSKY: I think if--
rather than a neurology book, I'm
thinking of Carl Sagan's book, which there's
is a sort of triune theory that's very popular,
which is that the brain consists of three major divisions.
And the-- I forget what the lowest level one is called,
but the middle level is sort of the amphibian and then
the mammalian and--
it's in the mammalian development
that large parts of the brain are cortexed.
And the cortex isn't so much like a tangled neural net.
But it's divided mainly into columns.
And each column, these vertical columns,
tend to have six or seven layers.
I think six is the standard.
And the whole thing is--
what is it about 4 millimeters?
4 or 5 millimeter thick, maybe a little more.
And among these columns, there are
major columns, which have about 1,000 neurons.
And one of these columns is made up
of maybe 10 or 20 of these mini columns that are 50 or 100
or whatever.
And so my inclination is to suspect that since these
are the animals that think and plan many steps ahead
and do all the sorts of things we take for granted in humans,
that we want to look there for the architecture of memory
and problem-solving systems.
In the animals without cortexes, you
can account for most of their behavior
in terms of fairly low-level, immediate stimulus response
reflexes and large major states, like turning
on some parts of some big blocks of these reflexes
when it's hungry and turn on other blocks
when there's an environmental threat and so forth
or whatever.
Anyway, I forget what--
yes?
AUDIENCE: So in Chapter 3 you talk
about the stages we go through when we face something like your car
breaks down and you can't go to work.
That's the example given in the book.
I'm wondering, how do we decide how we transition
from one stage to another?
And why do you go through the stages of denial, bargaining,
like frustration, depression, and then
like only the last stage seems productive?
I guess my main question is how do we
decide that we should transition from one stage to another
[INAUDIBLE]
MARVIN MINSKY: That's a beautiful question.
I think it's fairly well understood in the invertebrates
that there are different centers in the brain
for different activities.
And I'm not sure how much is known
about how these things switch.
How does an animal decide whether it's time to--
for example, most animals are either diurnal or nocturnal.
So some stimulus comes along, like it's getting dark,
and a nocturnal animal might then start waking up.
And it turns on some part of the brain,
and it turns off some other parts.
And it starts to sneak around looking for food
or whatever it does at night.
Whereas a diurnal animal, when it starts to get dark,
that might trigger some brain center to turn on,
and it looks for its place to sleep and goes and hides.
So some of these are due to external things.
Then, of course, they're internal clocks.
So for lots of animals, if you put it
in a box that's dimly illuminated
and it has a 24-hour cycle of some sort,
it might persist in that cycle for quite a few days
and go to sleep every 24 hours for half the time and so on.
A friend of mine once decided he would see about this.
And it's a famous AI theorist named Ray Solomonoff.
And he put black paint on all his windows.
And found that he had a 25 or 26-hour natural
cycle, which was very nice.
And this persisted for several months.
I had another friend who lived in the New York subways,
because his apartment was in a building that
had an entrance to the subway.
And he stayed out of daylight for six months.
But anyway, he too found that he preferred
to be on a 25 or 26-hour day than 24.
I'm rambling.
But we apparently have several different systems.
So there's a dead-reckoning system,
where some internal clocks are regulating your behavior.
And then there are other systems where
people are very much affected by the amount of light
and so forth.
So we probably have four or five ways
of doing almost everything that's important.
And then people get various disorders
where some of these systems fail.
And a person doesn't have a regular sleep cycle.
And there are disorders where people fall--
what's it called when you fall asleep every few minutes?
AUDIENCE: Narcolepsy.
MARVIN MINSKY: Narcolepsy and all sorts
of wonderful disorders just because the brain has evolved
so many different ways of doing anything that's very important.
Yeah?
AUDIENCE: Can you describe the best piece of criticism
for the society of mind theory?
MARVIN MINSKY: Best piece of what?
AUDIENCE: The best criticism.
MARVIN MINSKY: Oh.
It reminds me of the article I recently
read about the possibility of a virus for--
what's the disorder where--
AUDIENCE: Alzheimer's.
MARVIN MINSKY: No.
The-- uh-- [LAUGHTER] actually, there
isn't any generally accepted cause for Alzheimer's, as far
as I know.
What?
AUDIENCE: Somebody just did an experiment
where they injected Alzheimer infected matter into someone,
and they got the same plaque.
MARVIN MINSKY: Oh, well, right, I
wonder if that's a popular theory.
No, what's the one where people--
AUDIENCE: Fibromyalgia.
MARVIN MINSKY: Say it again.
AUDIENCE: Fibromyalgia.
MARVIN MINSKY: Yes, right.
That's right, which is not recognized by most theorists
to be a definite disease.
But there's been an episode in which somebody--
I forget what her name is--
was pretty sure that she had found a virus for it.
And every now and then somebody revives that theory
and tries to get more evidence for it.
Anyway, there must be disorders where the programming is bad,
rather than a biochemical disorder, because whatever
the brain is, the adult brain certainly
has a very large component of what
we would, in any other case, consider to be software.
Namely lots of things that you've learned, including ways
for one part of the brain to discover how to
modulate or turn on or turn off other parts of the brain.
And since we've only had this kind of cortex
for 4 or 5 million years, it's probably
still got lots of bugs.
Evolution never knows what--
when you make a new innovation, you
don't know what's going to come after that that might find bugs
and ways to get short-range advantages,
short-term advantages at the expense
of longer-term advantages.
So lots of mental diseases might be software bugs.
And a few of them are known to be
connected to abnormal secretions of chemicals and so forth.
But even in those cases, it's hard to be sure that
the overproduction or underproduction
of a neurologically important chemical is--
what should I call it--
a biological disorder or a functional disorder,
because some part of the nervous system
might have found some trick to cause abnormal secretions
of some substance.
That's the sort of thing that we can
expect to learn a great deal more about
in the next generation because of the lower cost and greater
resolution of brain scanning techniques and--
what's his name-- and new synthetic
ways of putting in fluorescent chemicals into a normal brain
without injuring it much, so that you can now
do sort of macro chemical experiments of seeing what
chemicals are being secreted in the brain
with new kinds of scanning techniques.
So neuroscience is going to be very exciting
in the next generation with all the great new instruments.
As you know, my complaint is that somehow introduction
to the--
I'm not saying any of the present AI theories have been
confirmed to tell you that the brain works as such and such
a rule-based system or such and such a--
or use Winston-type representations
or Roger Shank-type representations
or scripts or frames or whatever.
And the next-to-last chapter of The Emotion Machine
sort of summarizes I think almost a dozen different AI
theories of ways to represent knowledge.
Nobody has confirmed that any of those particular ideas
represent what happens in a mammalian brain.
And the problem to me is that the neuroscience community just
doesn't read that stuff and doesn't design
experiments to look for them.
David has been moving from computer science and AI
into that.
So he's my current source of knowledge about
what's happening there.
Have any of you been following contemporary neuroscience?
That's strange.
Yeah?
AUDIENCE: So you already talked about software a little bit.
So I think they analyzed Einstein's brain.
And I realize that's why they talk about glial cells.
And maybe he had a lot more glial cells than normal humans.
And so do you believe that the intelligence of humans
is more on the software side or on the hardware side?
Like we have computers that are very, very powerful--
could we create software to run on these machines
that reproduces what humans do?
MARVIN MINSKY: I don't see any reason to doubt it.
As far as we know computers can simulate anything.
What they can't do yet, I suppose,
is simulate large-scale quantum phenomena,
because, you know, the Feynman theory of quantum mechanics
is that if you have a network of physical systems
that are connected, then it's in the nature of physics
that whatever happens from one state to another
in the real universe actually
happens by the wave function.
The wave function represents the sum
of the activities propagating through all possible paths.
So in some sense that's too exponential
to simulate on a computer.
In other words, I believe the biggest supercomputers
can simulate a helium atom today fairly well.
But they can't simulate a lithium atom,
because it's sort of four or five layers of exponentiation.
So it would be 2 to the 2 to the 2 to the 2 and 4 to the 4
to the 4 to the 4.
[INAUDIBLE]
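The "layers of exponentiation" Minsky gestures at can be seen with a simpler, standard back-of-the-envelope argument (this is the generic state-vector count, not Feynman's path-sum formulation specifically): a system of n two-level quantum components needs 2^n complex amplitudes to describe, so brute-force simulation doubles in memory with every component added.

```python
# Rough illustration of why brute-force quantum simulation is
# exponential: the state vector over n two-level systems holds 2**n
# complex amplitudes, so memory doubles with each component added.

def state_vector_bytes(n, bytes_per_amplitude=16):
    """Memory to store the full state vector (complex128 amplitudes)."""
    return (2 ** n) * bytes_per_amplitude

for n in (30, 40, 50):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} two-level systems -> {gib:,.0f} GiB")
# 30 -> 16 GiB, 40 -> 16,384 GiB, 50 -> 16,777,216 GiB
```

Ten extra components multiply the memory by about a thousand, which is why even the biggest classical machines stall at a few dozen fully general quantum components.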
But I suspect that the reason the brain works
is that it's evolved to prevent quantum effects from making
things complicated.
The great thing about a neuron is that, generally speaking,
a neuron fires all or none.
And you get this point--
you have to get a full half-volt potential
between the neurons firing [INAUDIBLE] fluid.
And half a volt is a big [INAUDIBLE].
AUDIENCE: So you believe that the software that we have right
now is equivalent to, for example,
the intelligence that we have like in dogs
or, for example, simple animals is like the difference that
like-- do we just need to implement the software,
like multiply the software?
Or so how we need to create a whole software that--
MARVIN MINSKY: No, there doesn't seem
to be much difference in the architecture,
in the local architecture of--
AUDIENCE: Turn your microphone on.
The one in your pocket.
MARVIN MINSKY: Oh, did I turn it off again?
AUDIENCE: Yes.
MARVIN MINSKY: It's not green.
AUDIENCE: Yeah, so throw the switch.
Is it green now?
MARVIN MINSKY: Now, it's green.
The difference between the dog and the person
is the huge frontal cortex.
I think the rest of it is fairly similar.
And I presume the hippocampus and amygdala and the structures
that control which parts of the cortex
are used for what are somewhat different.
But the small details of the--
all mammalian brains are practically the same.
I mean, basically, you can't make an early genetic change
in how neurons work, because all the brain
cells of the offspring would be somewhat different
and the thing would be dead.
So evolution has this property that generally there
are only two places in the development of an embryo
that evolution can operate.
Namely in the pre-placental stage,
you can change the way the egg breaks up and evolves.
And you can have amazing things like identical twins
happen without any effect on the nature of the adult offspring.
Or you can change the things that happened most recently
in evolution like little tweaks in how
some part of the nervous system works,
if it doesn't change earlier stages, what you--
However, mutations that operate in the middle of all that
and change the number of segments in the embryo--
I guess you could have a longer tail or a shorter tail.
And that won't affect much.
But if you change the 12 segments of the spine
that the brain develops from, you'd
get a huge alteration in how that animal will think.
In other words, evolution cannot change intermediate structures
very much or the animal won't live.
Bob Lawler.
AUDIENCE: If one thinks of comparing a person to a dog,
would it not be most appropriate to think of those persons who
were like the wild boy of southern France
who grew up in the woods without any language
and say that if you're going to look
at individual's intelligence that
would be a fair comparison with the dog.
Whereas what we have when we think of people today
is people who have learned so much through interaction
with other people, through the transmission of culture--
is it not essentially ways of thinking
that have been learned throughout the history
of civilization that some of us are able to pass on to others?
MARVIN MINSKY: Oh, sure.
Although if you expose a dog to humans,
he doesn't learn language.
So--
AUDIENCE: He may or may not come if you call him.
MARVIN MINSKY: Right.
But presumably language is fairly recent.
So you could have mutations in the structure of the language
centers and still have a human that's alive.
And it might be better at language than most other people
or somewhat worse.
So we could have lots of small mutations in anything
that's been recently evolved.
But the frontal cortex is--
the human cortex is really very large
compared to the rest of the brain.
Same in dolphins and a couple of other animals, I forget,
whales.
yeah?
AUDIENCE: So the reason why I ask
that is that it seems to me that we have some quality,
like some kind of--
we can see the world--
like add some qualities to the world.
And like this is what I would call consciousness.
And like for me, it seems that dogs also
have this quality of like seeing the world
and like adding qualities to the world, so like maybe,
this is good, this is bad.
Like there are different qualities for different beings.
And like the software that we produce right now
seems to be maybe faster and like maybe do more tests
than what maybe a dog does.
But for me, it doesn't seem that it displays this essential
quality--
I think it doesn't have consciousness
in the sense that it doesn't attribute qualities to the things
in the world, maybe.
MARVIN MINSKY: Well, I think I know what you're getting at.
But you're using that word consciousness,
which I've decided to abandon, because it's
36 different things.
And probably a dog has 5 or 6 of them or 31.
I don't know.
But one question is, do you think
a dog can think several steps ahead and consider
two alternative--
that's funny.
Oh, let's make this abstract.
So here's a world.
And the dog is here.
And it wants to get here.
And there are all sorts of obstacles in it.
So can the dog say, well, if I went
this way I'd have such and such difficulty, whereas if I
went this way, I'd have this difficulty.
Well, I think this one looks better.
Do you think your dog considers two or three alternatives
and makes plans?
I have no idea.
But the curious thing about a person
is you can decide that you're going
to not act in the situation until you've
considered 16 plans.
And then one part of your brain is
making these different approaches to the problem.
And another part of your brain is saying,
well, now, I've made five plans, and I'm beginning
to forget the first one.
So I better reformulate it.
And you're doing all of this self-consciously, in the sense
that you're making plans that involve predicting what
decisions you will make.
And instead of making them, you make the decision
to say I'm going to follow out these two plans
and use the result of that to decide which one to.
Do you think a dog does any of that?
Does it look around and say, well,
I could go that way or this way?
Hmm.
I remember our dog was good at if you'd throw
a ball it would go and get it.
And if you threw two balls it would go and get both of them.
And sometimes if you threw three balls,
it would go and get them all.
And sometimes if a ball would roll under a couch
that it couldn't reach, it would get the other two,
and it would think.
And then it would run back to the kitchen
where that ball is usually found.
And then it would come back disappointed.
So what does that mean?
Did it have parallel plans?
Or does it make a new one when the previous one fails?
And they're not actually parallel.
What's your guess?
How far ahead does a dog think?
Do you have a dog?
AUDIENCE: Yeah.
I do have a dog.
But I don't believe that's the essential part of beings
that have some kind of advanced brain.
Like we can plan ahead.
Humans can plan ahead.
But I don't think they are the fundamental part
of intelligence.
Like humans, I think Winston says
that humans are better than the primates
in like they can understand stories
and they can join together stories.
But somehow I don't buy the story that primates are just
like rule planners.
I think somehow we have some quality meshing of the world
and like somehow we're not writing a software.
MARVIN MINSKY: But, you know, it's funny.
Computer science teaches us things
that weren't obvious before.
Like it might turn out that if you're a computer
and you only have two registers, then--
well, in principle, you could do anything,
but that's another matter.
But it might turn out that maybe a dog has only two registers
and a person has four.
And a trivial thing like that makes
it possible to have two plans and put them in suspense
and think about the strategy and come back and change one.
Whereas if you only had two registers,
your mind would be much lower order.
And there's no big difference.
So computer science tells us that the usual way of thinking
about abilities might be wrong.
Before computer science, people didn't really
have that kind of idea.
Many years ago, I was in a contest--
I mean, you know, in science-- because some of our friends
showed that you could make a universal computer with four
registers.
And I had discovered some other things,
and I managed to show that you could make a universal computer
with just two registers.
And that was a big surprise to a lot of people.
But there never was anything in the history
of psychology of that nature.
So there never were really technical theories of--
it's really computational complexity.
What does it take to solve certain kinds of problems?
And until the 1960s, there weren't any theories of that.
And I'm not sure that that aspect of computer science
actually reached many psychologists
or neuroscientists.
I'm not even sure that it's relevant.
But it's really interesting that the difference
between 2 and 3 registers could make an exponential difference
in how fast you could solve certain kinds of problems
and not others.
So maybe there'll be a little more mathematical psychology
in the next couple of decades.
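The register machines Minsky is recalling are counter machines: unbounded registers with only increment, decrement, and test-for-zero, which he showed to be universal with just two registers. A minimal interpreter makes the model concrete (the instruction format here is invented for illustration):

```python
# A minimal two-counter (Minsky) machine interpreter.
# Instruction format (invented here for illustration):
#   ("inc", r, j)      increment register r, then jump to j
#   ("dec", r, j, k)   if register r > 0: decrement it, jump to j;
#                      else: jump to k
#   ("halt",)
def run(program, a=0, b=0):
    regs = [a, b]
    pc = 0
    while program[pc][0] != "halt":
        op = program[pc]
        if op[0] == "inc":
            regs[op[1]] += 1
            pc = op[2]
        else:  # "dec": decrement-or-branch is the only test available
            if regs[op[1]] > 0:
                regs[op[1]] -= 1
                pc = op[2]
            else:
                pc = op[3]
    return regs

# Example program: drain register 1 into register 0, computing a + b.
add = [
    ("dec", 1, 1, 2),   # 0: if B > 0, decrement B and go to 1; else halt
    ("inc", 0, 0),      # 1: increment A, go back to 0
    ("halt",),          # 2
]
print(run(add, a=3, b=4))  # -> [7, 0]
```

That such a spartan machine is universal (given suitable encodings) is exactly why Minsky finds the jump from "two registers" to "four registers" suggestive: tiny architectural differences can change what is practical to compute, even when they don't change what is computable in principle.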
Yeah.
AUDIENCE: So in artificial intelligence,
how much of our effort should be devoted
to a kind of reflecting on our thinking as humans
and trying to figure out what's really going on
inside our brains and trying to kind of implement
that versus observing and identifying
what kinds of problem we, as humans, can solve and then come
up with an intuitive way for a computer
to kind of in a human-like way solve these problems?
MARVIN MINSKY: There are a lot of nice questions.
I don't think it makes any sense
to suggest that we think about what's happening in our brains,
because that takes scientific instruments.
But it certainly makes sense to go over
older theories of psychology and ask
to solve a certain kind of problem,
what kind of procedures are absolutely necessary?
And you could find some things like that, like how
many registers would you need and what kinds of conditionals
and what kind of addressing.
So I think a lot of cognitive psychology, modern cognitive
psychology, is of that character.
But I don't see any way to introspect well enough
to guess how your brain does something,
because we're just not that conscious.
You don't have access to--
you could think for 10 years about how
do I think of the next word to speak, and it's unlikely
that you would--
you might get some new ideas about how
this might have happened, but you couldn't be sure.
Well, I take it back.
You can probably get some correct theories
by being lucky and clever.
And then you'd have to find a neuroscientist
to design an experiment to see if there's
any evidence for that.
In particular, I'd like to convince
some neurologists to consider the idea of k-lines.
It's described I think in both of my books.
And think of experiments to see if you could get them to light
up or otherwise localize in--
once you have in your mind the idea that maybe the way one
brain center connects--
sends information to another-- is over something like k-lines,
which I think I talked about that the other day--
random superimposed coding on parallel wires,
then maybe you could think of experiments
that even present brain scanning techniques
could use to localize these.
My main concern is that the way they do brain scanning now
is to set thresholds to see which brain centers light up
and which turn off.
And then they say, oh, I see this activity looks
like it happens in the lateral hippocampus
because you see that light up.
I think that there should be at least a couple
of neuroscientist groups who do the opposite, which
is to reduce the contrast.
And when there are several brain centers that
seem to be involved in an activity,
then say something to the patient and look for one area
to get 2% dimmer and another to look 4% brighter
and say that might mean that there's
a k-line going from this one to that one
with an inhibitory effect on this or that.
But as far as I know right now, every paper
I've ever seen published showing brain centers lighting up
has high contrast.
And so they're missing all the small things.
And maybe they're only seeing the end result of the process
where a little thinking has gone on with all these intricate
low intensity interactions, and then the thing
decides, oh, OK, I'm going to do this.
And you conclude that that brain center which lit up
is the one that decided to do this,
whereas it's the result of a very small, fast avalanche.
AUDIENCE: Have you seen the one a couple of weeks
ago about reading out the visual in real time?
MARVIN MINSKY: From the visual cortex?
AUDIENCE: Yes.
Quite a nice hack-- they aren't actually
reading out the visual field.
For each subject, they do a massive amount of training
where they flash thousands of 1-second video clips
and assemble a database of very small perturbations
in different parts of the visual cortex lighting up.
And they show a novel video to each of the subjects
and basically just do a linear combination
of all of the videos that they have
done in the training phase weighted by how closely things
line up in the brain.
And you can sort of see what's going on.
It's quite striking.
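The reconstruction recipe the questioner describes, averaging the training clips weighted by how closely each clip's recorded brain-response pattern matches the new response, can be caricatured in a few lines. Everything here is a schematic stand-in (random arrays instead of recordings, cosine similarity as the match score), not the published method:

```python
# Schematic version of the reconstruction idea described above:
# average stored training clips, weighting each by how well its
# recorded response pattern matches the new brain response.
import numpy as np

rng = np.random.default_rng(0)
n_clips, n_voxels, n_pixels = 100, 50, 64

train_responses = rng.normal(size=(n_clips, n_voxels))  # one pattern per clip
train_clips = rng.uniform(size=(n_clips, n_pixels))     # the clips themselves

def reconstruct(new_response, k=10):
    # cosine similarity between the new response and each stored pattern
    sims = train_responses @ new_response
    sims = sims / (np.linalg.norm(train_responses, axis=1)
                   * np.linalg.norm(new_response))
    top = np.argsort(sims)[-k:]          # keep the k best-matching clips
    w = np.clip(sims[top], 0, None)      # ignore anti-correlated matches
    return (w[:, None] * train_clips[top]).sum(0) / (w.sum() + 1e-9)

estimate = reconstruct(rng.normal(size=n_voxels))
```

The output is a blurry blend of whatever the library contains, which matches the questioner's caveat: the method retrieves and mixes seen material rather than reading the visual field directly.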
MARVIN MINSKY: Can you tell what they're thinking?
AUDIENCE: You can only tell what they're seeing.
But I think--
MARVIN MINSKY: You know, if your eyes are closed,
your primary visual cortex probably doesn't do anything,
does it?
AUDIENCE: I think it's just--
yeah.
MARVIN MINSKY: But the secondary one
might be representing things that might be.
AUDIENCE: Yes.
So the goal of the authors of this paper
is eventually to literally make movies out of dreams.
But that's a long way off.
MARVIN MINSKY: It's an old idea in science fiction.
How many of you read science fiction?
Wow, that's a majority.
Who's the best new writer?
AUDIENCE: Neal Stephenson.
MARVIN MINSKY: He's been writing a long time.
AUDIENCE: He's new compared to Heinlein.
[LAUGHTER]
MARVIN MINSKY: I had dinner with Stephenson
at the Hillis's a couple of years ago.
Yeah?
AUDIENCE: So from what I understood,
it seems that you're saying that the difference between us
and, like, for example, dogs is just computational power.
So do you believe that the difference
between dogs and computers is also just computational?
Like what's the difference between dogs and, like, a Turing
machine?
Or is there no difference?
MARVIN MINSKY: It might be that only humans and maybe
some of their closest relatives can imagine a sequence.
In other words, the simplest and oldest theories in psychology
were the theories like David Hume had
the idea of association, one idea in the mind or brain
causes another idea to appear in another.
So that means that a brain that's learned associations
or learned if/then rule-based systems
can make chains of things.
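The Hume-style association chaining just described can be sketched as a tiny if/then rule system. The rules here are invented for illustration.

```python
# Each rule says: when this idea is active, the next one follows.
rules = {
    "hungry": "find food",
    "find food": "go to kitchen",
    "go to kitchen": "open fridge",
}

def chain(idea, rules, limit=10):
    """Follow associations from one idea to the next, Hume-style."""
    sequence = [idea]
    while idea in rules and len(sequence) < limit:
        idea = rules[idea]
        sequence.append(idea)
    return sequence

chain("hungry", rules)
# -> ['hungry', 'find food', 'go to kitchen', 'open fridge']
```

The point of the question that follows is that such a chainer can only run forward; it cannot imagine two alternative chains and compare their outcomes.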
But the question is, can any animal, other than humans,
imagine two different situations and then compare them and say,
if I did this and then that, how would the result differ
from doing that and then this?
If you look at Gerry Sussman's thesis--
if you're at MIT and you're taking a course, a good thing to do
is to read the PhD thesis of your professor.
It not only will help you understand
better what the professor said, you'll
get a higher grade, if you care, and many other advantages.
Like you'll actually be able to talk to him
and his mind won't throw up.
So, you know, I don't know if a dog can recapitulate as--
can the dog think, I think I'll go around this fence
and when I get to this tree I'll do this, I'll pee on it--
that's what dogs do--
whereas if I go this way something else will happen?
It might be that pre-primates
can't do much of that.
On the other hand, if you ask, what is the song of the whale?
What's the whale that has this 20-minute song?
My conjecture is that a whale has
to swim 1,000 miles or several hundred miles sometimes
to get the food it wants because things change.
And each group of whales--
humpback whales, I guess, sing this song
that's about 20 minutes long.
And nobody has made a good conjecture
about what the content of that song is,
but it's shared among the animals.
And they can hear it 20 or 50 miles away and repeat it.
And it changes every season.
So I suspect that the obvious thing that it should be about
is where's the food these days, where
are the best flocks of fish to eat,
because a whale can't afford to swim 200 miles to the place
where its favorite fish were last year and find it empty.
It takes a lot of energy to cross the ocean.
So maybe those animals have the ability
to remember very long sequences and even
some semantics connected with it.
I don't know if dogs have anything like that.
Do dogs ever seem to be talking to each other?
Or they just--
AUDIENCE: I have a story about dogs.
So apparently in Moscow, not all dogs,
but a very small fraction of the stray dogs in the city
have learned how to ride the metro.
They live out in the suburbs because I
guess people give them less trouble when they're out
in the suburbs.
And then they take the subway each day
into the city center where there are more people.
And they have various strategies for begging in the city center.
So for instance, they find some guy with a sandwich,
and they bark really loudly behind the guy,
and the guy would drop the sandwich.
And then they would steal it.
Or they have a pack of them, and they all know each other.
And they send out a really cute one to beg for food,
and so they'll give the cute one food.
And the cute one brings it back to everyone else.
And simply navigating the subway is actually a bit complicated
for a dog, but somehow a very small group of dogs in Moscow
have learned how to do it, like figure out where their stop is,
get on, get off.
MARVIN MINSKY: Yeah, our dog once hopped on the Green Line
and got off at Park Street.
So she was missing for a while.
And somebody at Park Street called up
and said your dog is here.
So I went down and got her.
And the agent said, you know, we had
a dog that came to Park Street every day and changed trains
and took the Red Line to somewhere.
And finally, we found out that its master had--
it used to go to work with its owner every day, and he died.
And the dog took the same trip every day.
The T people understood that it shouldn't be bothered.
Our dog chased cars.
Was it Jenny?
And that was terrible because we knew she was going to get hurt.
And finally, a car squashed her leg,
and she was laid up for a while with a somewhat broken leg.
And I thought, well, she won't chase cars anymore.
But she did.
But what she wouldn't do is go to the intersection of Carlton
and Ivy Street anymore, which is--
so she had learned something.
But it wasn't the right thing.
I'm not sure I answered your--
AUDIENCE: Actually, according to--
there's this story that you gave in Chapter 2
about the girl who was digging dirt.
So the case where she learns whether digging
in the dirt is a good or bad activity is when
there is somebody with whom she has an attachment bond
present who's telling her whether it's good or bad.
And the case where she learned to avoid
that spot is when something bad happened to her in the spot.
So in a sense, the dog is behaving just like that logic.
MARVIN MINSKY: Yes.
Except that the dog is oriented toward location rather than
something else.
So--
AUDIENCE: Professor, can you talk about a possible hierarchy
of representation schemes of knowledge,
like semantic nets on top?
And at the bottom, there's like--
you were mentioning k-lines in the middle,
neural nets on the bottom,
and there's things up there.
So the way I thought about it is that for humans,
it's just natural that you need all
of the intermediate representations in order to support something
like semantic nets.
And it seems natural to me to think
that humans have this whole hierarchy
of representations, but dogs might
have something only in the middle,
like they only have something like neural nets or something.
So my question is, what behaviors
that you could observe in real life could only
be done with one of these intermediate representations
of knowledge that can't be done with something like machine
learning?
MARVIN MINSKY: Hmm, you mean machine learning
of some particular kind?
AUDIENCE: The kind that's currently fashionable, I think.
Kind of like brute-force calibration of some parameters.
It seems to me that if you recognize a behavior like that,
it might be a worthy intermediate goal
to be able to model that instead of trying to model something
like natural language, where you
might need the first part to get the second part.
MARVIN MINSKY: Well, it would be nice to know--
I wonder how much is known about elephants, which
are awfully smart compared to--
I suspect that they are very good at making plans,
because it's so easy for an elephant
to make a fatal mistake.
So unfortunately, probably no research group
has enough budget to study that kind of animal,
because it's just too expensive.
How smart are elephants?
Anybody-- I've never interacted with one.
I'm not sure if you have a question.
AUDIENCE: I think the question is are there behaviors
that you need an intermediate level of the representation
of knowledge in order to perform that you
don't need the highest level, like semantic--
like basically natural language, to do.
So you could say that by some animal doing this behavior,
I know that it has some intermediate level
of representation of knowledge that's
more than kind of a brute force machine learning approach.
Because like what was discussed before,
a computer can do path finding, which
is like a brute-force approach.
I don't think that's how humans do it or animals do it.
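The brute-force path finding the questioner mentions is essentially breadth-first search. A minimal sketch, with a one-dimensional toy example at the end just for illustration:

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Exhaustive breadth-first search: the 'brute force' way to find a path."""
    frontier = deque([start])
    came_from = {start: None}   # also serves as the visited set
    while frontier:
        node = frontier.popleft()
        if node == goal:
            # walk backwards through came_from to recover the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in came_from:
                came_from[nxt] = node
                frontier.append(nxt)
    return None

# Toy example: moving +1/-1 along a number line
bfs(0, 3, lambda n: [n - 1, n + 1])  # -> [0, 1, 2, 3]
```

The contrast the questioner is drawing is that this search considers every reachable state indiscriminately, whereas people seem to pick out only the relevant ones.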
MARVIN MINSKY: I can't think of a good--
it's just hard to think of any animals besides us that
have really elaborate semantic networks.
There's Koko, who is a gorilla that apparently
had hundreds of words.
But--
AUDIENCE: I think the question is
to find something that's lower than words, like maybe Betty
the crow--
MARVIN MINSKY: With that stick, yeah.
How many of you have seen the crow movie?
She has a wire that she bends and pulls something out
of a tube.
But--
AUDIENCE: I don't think machine learning can do that.
But I don't think you need semantic nets either.
MARVIN MINSKY: I have a parrot who lives
in a three-dimensional cage.
And she knows how to get from any place to another.
And if she's in a hurry, she'll find a new way
at the risk of injuring a wing, because there
are a lot of sticks in the way.
So flying is risky.
Our daughter, Julie, once visited Koko, the gorilla.
And she was introduced--
Koko's in a cage.
And Penny, who is Koko's owner, introduces
Julie in sign language.
It's not spoken.
It's sign language.
So Julie gets some name.
And she's introduced to Koko.
And Koko likes Julie.
So Koko says, let me out.
And Penny says, no, you can't get out.
And Koko says, then let Julie in.
And I thought that showed some fairly abstract reasoning
or representation.
And Penny didn't let Julie in.
But Koko seemed to have a fair amount of declarative syntax.
I don't know if she could do passives or anything like that.
If you're interested, you probably
can look it up on the web.
Penny's owner-- I mean Penny thought that Koko
knew 600 or 700 words.
And a friend of ours was a teenager who worked for her.
And what's his name?
And he was convinced that Koko knew more than 1,000 words.
But he said, you see, I'm a teenager
and I'm still good at picking up gestures and clues better
than the adults here.
But anyway I gather Koko is still there.
And I don't know if she's still learning more words.
But every now and then we get a letter asking
to send more money.
Oh, in the last lecture, I couldn't
think of the right cryptarithmetic example.
I think that's the one that the Newell and Simon book starts out
with.
So obviously, m is 1.
And then I bet some of you could figure that out
in 4 or 5 minutes.
Anybody figured it out yet?
Help.
Send more questions.
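The puzzle alluded to here, where "obviously, m is 1," is presumably SEND + MORE = MONEY, the classic cryptarithmetic example (the transcript never names it, so this is an inference). A brute-force solver is only a few lines:

```python
from itertools import permutations

def solve_send_more_money():
    """Assign distinct digits to S,E,N,D,M,O,R,Y so that SEND + MORE = MONEY."""
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:     # leading digits can't be zero
            continue
        send = 1000*s + 100*e + 10*n + d
        more = 1000*m + 100*o + 10*r + e
        money = 10000*m + 1000*o + 100*n + 10*e + y
        if send + more == money:
            return dict(zip("SENDMORY", (s, e, n, d, m, o, r, y)))
    return None
```

The search confirms why M is "obviously" 1: M is the carry out of a four-digit sum, and the carry of two four-digit numbers can be at most 1.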
Yeah?
AUDIENCE: I have an example.
For instance, I go out to a restaurant
with some type of exotic food that I've never ever had before.
And I end up getting sick from it.
So what determines what I learned from this?
Because there are many different possibilities.
There is the one possibility that I
learn to avoid the specific food I ate.
Another possibility is that I learn
to avoid that type of food, because it
might contain some sort of spice that I react to badly.
And a third possibility-- there might be more--
I learn to avoid that restaurant,
because it just might be a bad restaurant.
So in this case, it's not entirely clear
which one to pick.
And, of course, in real life, I might
go there again and comparatively try another food
or try the same food at a different restaurant.
But what do you think about that scenario: what causes
people to pick which one?
MARVIN MINSKY: The trouble is we keep thinking of ourselves
as people.
And what you really should think of yourself as
is a sort of Petri dish with a trillion bacteria in it.
And it's really not important to you
what you eat, but your intestinal bacteria are
the ones who are really going to suffer,
because they're not used to anything new.
So I don't know what conclusion to draw from that.
But--
AUDIENCE: Previously, you mentioned
that David Hume thought that knowledge
is represented as associations.
And that occurs to me as being some sort of like a Wiki
structure where entries have tags.
So an entry might be defined by what tags it has
and what associations it has.
I'm wondering if that structure has
been-- if somebody has attempted to code that
into some kind of computational structure,
has there been any success with putting
that idea into a potential AI?
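The tagged-entry structure the questioner describes can be sketched as a toy semantic network, where each entry is defined by its tags and its associations. The entries and tags here are invented for illustration:

```python
# Each entry is defined by its tags and its associations (links).
wiki = {
    "dog":  {"tags": {"animal", "pet"}, "links": {"bone", "bark"}},
    "cat":  {"tags": {"animal", "pet"}, "links": {"mouse"}},
    "bone": {"tags": {"object"},        "links": set()},
}

def related_by_tag(entry, wiki):
    """Entries sharing at least one tag with `entry` (association by tag)."""
    tags = wiki[entry]["tags"]
    return {name for name, e in wiki.items()
            if name != entry and e["tags"] & tags}

related_by_tag("dog", wiki)  # -> {'cat'}
```

This is the flavor of representation the discussion that follows contrasts with the probabilistic methods that had become dominant.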
MARVIN MINSKY: I don't know how to answer that.
Do any psychologists use semantic networks
as representations?
Pat, do you know, has anybody--
is anyone building an AI system with semantic representations
or semantic networks anymore?
Or is it all--
everything I've seen has gone probabilistic
in the last few years.
Your project.
Do you have any competitors?
AUDIENCE: No.
MARVIN MINSKY: Any idea what the IBM people are using?
I saw a long article that I didn't read yet, but--
AUDIENCE: Traditional information retrieval
plus 100 hacks plus machine learning.
MARVIN MINSKY: They seem to have a whole lot
of slightly different representations
that they switch among.
AUDIENCE: But none of them are very semantic.
AUDIENCE: Well, they probably have--
I don't know, does anybody know what the answer is?
But they must have little frame-like things
for the standard questions.
MARVIN MINSKY: Of course, the thing doesn't answer any--
it doesn't do any reasoning as far as you can tell.
AUDIENCE: Right.
MARVIN MINSKY: So it's trying to match sentences
in the database with the question.
Well, what's your theory of why there
aren't other groups working on what we used to work on and you are?
AUDIENCE: Well, bulldozer computing is a fad.
And if you can do better in less time that way
than figuring it out how it really works,
then that's what you do.
No one does research on chess, no one
does research on how humans might play chess,
because the bulldozer computers have won.
MARVIN MINSKY: Right.
There were some articles on chess and checkers
early in the game.
But nothing recent as far as I know.
AUDIENCE: So in many ways it's a local maximum phenomenon.
So bulldozer computing stuff has got up
to a certain local maximum.
Until you can do better than that some other way, then
[INAUDIBLE]
MARVIN MINSKY: Well, I wonder if we
could invent a new TV show where the questions are interesting.
Like I'm obsessed with the question
of why you can pull something with a string,
but you can't push it.
And, in fact, what was this-- we had
a student who actually did something with that a long time
ago.
But I've lost track of him.
But how could you make a TV show that
had common sense questions rather than ones about sports
and actors?
AUDIENCE: Well, don't you imagine what
happens when you push a string?
It's hard to explain the--
MARVIN MINSKY: It buckles.
AUDIENCE: It's easy to imagine.
MARVIN MINSKY: Yeah, so you can simulate it.
AUDIENCE: Yeah.
MARVIN MINSKY: Yeah.
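The push-versus-pull asymmetry can even be captured in a toy model (my own sketch, not from the lecture): treat the string as a chain of links that transmit tension but not compression.

```python
def force_at_far_end(force_at_hand, n_links=10):
    """Propagate a force along a chain of links that can pull but not push."""
    f = force_at_hand
    for _ in range(n_links):
        f = max(f, 0.0)  # each link carries tension; compression just buckles
    return f

force_at_far_end(+5.0)  # pulling: the full force arrives -> 5.0
force_at_far_end(-5.0)  # pushing: the string buckles; nothing arrives -> 0.0
```

The one-line physical insight, that each link clamps compressive force to zero, is exactly the "it buckles" answer Minsky gives above.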
AUDIENCE: I have a question.
So suppose in the future we can create
a robot as intelligent and as smart as a human,
how should we evaluate it?
When do we know that we've reached it, like which
test should it pass or which [INAUDIBLE] should [INAUDIBLE]?
So for example, [INAUDIBLE] asked some pretty hard
questions and seem to be intelligent.
But all it is doing is making some attempts
and then calculating some probabilities and stuff.
Humans don't do that.
They try to understand the question and look to answer it.
But then suppose you can create a robot that
can behave as if it is like--
I don't know, how would you evaluate,
when do you know that you've reached something?
MARVIN MINSKY: That's sort of funny,
because if it's any good, you wouldn't have that question.
You'd say, well, what can't it do?
And why not?
And you'd argue with it.
In other words, people talk about passing the Turing test,
or whatever.
And it's hard to imagine a machine that you converse
with for a while and then when you're told it's a machine,
you're surprised.
AUDIENCE: So I think, for example,
you can make a machine to say some very intelligent and smart
things, because like it may know,
it takes all this information from different books
and all this information that it has somewhere in a database,
right.
But then like when people speak, they kind
of understand what you're saying.
How do you know like some robot understands something
or doesn't understand?
Or does it have to understand at all?
MARVIN MINSKY: Well, I would ask it questions like why can't you
push something with a string?
Anyone have a Google working?
What does Google say if you ask it that?
Maybe it'll quote me.
Or someone-- yeah?
AUDIENCE: How would you answer that question, like why
you can pull, but not push?
MARVIN MINSKY: I'd say, well, it would buckle.
And then they would say, what do you mean by buckle?
And then I'd say, oh, it would fold up
so that it got shorter without exerting any force at the end.
Or blah, blah.
I don't know.
There are lots of answers.
How would you answer it?
A physicist might say, if you've got it really very, very,
very straight, you could push it with a string.
But quantum mechanics would say you can't.
Yeah.
AUDIENCE: I feel like if you--
like the [INAUDIBLE] or like an interesting show
would be like an alternate cooking
show or something where you have to use
an object that's not normally found to have that use.
So like I want to paint a room, but you're not given a brush.
You're given, like, a sponge.
Or people hold up, like, eggplants and want it painted purple.
So it has to represent the thing in a different way other than--
MARVIN MINSKY: Words.
That's interesting.
When I was in graduate school, I took a course in knot theory.
And, in fact, you couldn't talk about them.
And if anybody had a question, they'd
have to run up to the board.
And, you know, they'd have to do something like this.
Is that a knot?
No.
No, that's just a loop.
But if you were restricted to words,
it would take a half hour to--
that's interesting.
Yeah?
AUDIENCE: You mentioned solving the string puzzle by imagining
the result. And I think I heard someone else say,
computers can do that in some way.
It can simulate a string.
And we know enough physics that you
can give a reasonable approximation of string.
But I find that the question that is often not asked in AI
is--
or by computers-- is how does one choose the correct model
with which to answer questions?
There's a lot of questions we're really good at answering
with computers.
And some of them, we have genetic algorithms they're good
for, some of them based in statistics, some of them
formal logic, some of them basic simulation.
But this is all--
to me this is the core question, because this
is what people decide, and no one
seems to have ever tackled an [INAUDIBLE].
MARVIN MINSKY: Well, for instance,
if somebody asks a question, you
have to make up a biography of that person.
So because the same question from different people
would get really different answers.
Why does a kettle make a noise when the water boils?
If you know that the other person is a physicist,
and it's easy to think of things to say, but--
it's not a very good example.
What's the context of that?
In a human conversation, how does each person
know what to say next?
AUDIENCE: I guess one question is,
how do people decide what evidence to use
to tackle a problem?
And I guess, the more fundamental question
is, when people are solving problems,
how do they decide how they're going
to think about the problem?
Are they going to think about it by visualizing it?
Think about it by trying to [INAUDIBLE]
Think about it by analogy or formal logic?
Of all the tools we have, why do we pick the ones we do?
MARVIN MINSKY: Yeah, well, that goes
back to if you make a list of the 15 most common ways
to think and somebody asks you a question or asks,
why does such and such happen, how do you decide which
of your ways to think about it?
And I suspect that's another knowledge base.
So we have commonsense knowledge about,
you know, if you let go of an object, it will fall.
And then we have more general knowledge
about what happens when an object falls.
Why didn't it break?
Well, it actually did.
Because here's a little white thing, which turned into dust.
And so that's why I think you need to have five or six
or how many different levels of representation.
So as soon as somebody asks a question,
one part of your brain is coming up with your first idea.
Another part of your brain is saying,
is this a question about physics or philosophy
or is it a social question?
Did this person ask it because they actually want to know
or they want to trap me?
So I think you--
generally this idea of this--
there must be many kinds of society of mind models
that people have.
And each person, whenever you're talking to somebody,
you choose some model of what is this conversation about?
Am I trying to accomplish something by this discussion?
Is it really an interesting question?
Do I not want to offend the person
or do I want to make him go away forever?
And little parts of your brain are making all these decisions
for you.
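The arrangement Minsky sketches, where one knowledge base of critics picks which way of thinking to apply, can be drafted as a tiny critic-selector loop. All the critic tests and the names of the ways to think here are invented placeholders:

```python
# Each critic recognizes a kind of question and proposes a way to think.
critics = [
    (lambda q: "falls" in q or "force" in q, "simulate the physics"),
    (lambda q: "why" in q,                   "build a causal explanation"),
    (lambda q: "compare" in q,               "reason by analogy"),
]

def select_way_to_think(question):
    """Return the first way of thinking whose critic recognizes the question."""
    for recognizes, way in critics:
        if recognizes(question):
            return way
    return "fall back to trial and error"

select_way_to_think("why does the kettle make a noise")
# -> 'build a causal explanation'
```

In Minsky's picture the interesting part is not the loop but the second knowledge base: the critics themselves have to be learned, and several of them run in parallel on every question.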
I'd like to introduce Bob Lawler, who's visiting.
AUDIENCE: One of my favorite stories about Feynman,
it comes from asking him to dinner one night.
And I asked him how he got to be so smart.
And he said that when he was an undergraduate here,
he would consider every time he was able to solve a problem,
just the beginning step of how to exploit that.
And what he would then do would be
to try to reformulate the problem
in as many different representations as he could.
And then use his solution of the first problem as a guide
in working out alternate representations and procedures
in that.
The consequence according to him was
that he became very good at knowing which
was the most fit representation to use
in solving any particular problem that he encountered.
And he said that that's where his legendary capability
in being so quick with good solutions and good methods
for solutions came from.
So maybe a criterion for an intelligent machine
will be one that had a number of--
15 different ways of thinking and applied them regularly
to develop alternative information
about different methods of problem solving.
You would expect it then to have some facility at choosing
based on its experience.
MARVIN MINSKY: Yeah, he wrote something about--
because the other physicists would
argue about whether to use Heisenberg matrices
or Schrodinger's equation.
And he thought he was the only one who
knew how to solve each problem both ways, because most
of the other physicists would get
very good at one or the other.
He had another feature which was that if you argued with him,
sometimes he would say, oh, you're right, I was wrong.
Like he was once arguing with Fredkin
about could you have clocks all over the universe
that were synchronized.
And the standard idea is you couldn't because of relativity.
And Fredkin said, well, suppose you start out on Earth
and you send a huge army of little bacteria-sized clocks
and send them through all possible routes to every place
and figure out and compensate for all the accelerations they
had experienced on the path.
Then wouldn't you get a synchronous time everywhere?
And Feynman said, you're right, I was wrong--
without blinking.
He may have been wrong, but--
More questions?
AUDIENCE: Along the same line as his question
about how do we know what method to use for solving problems.
I'm kind of curious how we know what data set
or what data to use when solving a problem.
Because we have so much sensory information
at any moment and so much data we have from experience.
But like when you get a problem, you instantly--
and I guess k-lines are sort of a solution for that.
But I'd be curious how you could possibly represent good data
relationships in a way that a computer might be able to use.
Because like right now, the problem
is that we always have to very narrowly define
a problem for a machine to be able to solve it.
But I feel like if we could come up
with good methods for filtering massive data
sets to identify what might be relevant that don't involve,
like, trial and error.
MARVIN MINSKY: Yes, so the thing must
be that if you have a problem, how do you characterize it?
How do you think, what kind of problem is this
and what method is good for that kind of problem?
So I suppose that people vary a lot.
And it's a great question.
That's what the critics do.
They say what kind of problem is this?
How do I recognize this particular predicament?
And I wish there were some psychologists who
thought about that the way Newell and Simon did, god,
in the 1960s.
That's 50 years ago.
How many of you have seen that book
called Human Problem Solving?
It's a big, thick book.
And it's got all sorts of chapters.
That's the one I mentioned the other day where they actually
had some theories of human problem
solving and simulated this.
They gave subjects problems like this and said,
we want you to figure out what numbers those are.
And they lied to the subjects and said,
this is an important kind of problem in cryptography.
The secret agents need to know how
to decode cryptograms of this sort, where usually it's
the other way around.
The numbers stand for letters.
And there's some complicated coding.
But these are simple cases.
So you have to figure out that sort of thing.
And then the book has various chapters
on theories of how you recognize different kinds of problems
and select strategies.
And, of course, some people are better than others.
And believe it or not, at MIT there was almost a whole decade
of psychologists here who were studying the psychology
of 5-person groups.
Suppose you take five people and put them in a room
and give them problems like this, or not the same cryptic,
but little puzzles that require some cleverness to solve.
And you record and video.
They didn't have video in those days.
So it was actual film.
And there's a whole generation of publications
about the social and cognitive behavior of these little groups
of people.
They zeroed in on 5-person groups for reasons
I don't remember.
But it turned out that almost always when
you had the group divided into two competitive groups with two
and three, every now and then they would reorganize.
But it was more a study in social relations
than in cognitive psychology.
But it's an interesting book.
There must be contemporary studies like that
of how people cooperate.
But I just haven't been in that environment.
Any of you taken a psychology course recently?
Not a one?
Just wonder what's happened to general psychology.
I used to sit in on Teuber and a couple of other lecturers here.
And psychology, of course, was sort of
like 20% optical illusions.
AUDIENCE: Yeah, they still do that--
MARVIN MINSKY: Stuff like that.
AUDIENCE: They also concentrate a lot
on development psychology.
MARVIN MINSKY: Well, that's nice to hear,
because I don't believe there was
any of that in Teuber's class.
AUDIENCE: I think Professor Gabrieli now teaches
the introductory psychology.
And he--
MARVIN MINSKY: Do they still believe Piaget or do
they think that he was wrong?
AUDIENCE: I think they probably take the same approach
as with like Freud, they would say
great ideas and a revolution, but they also don't
think he's the end of the--
MARVIN MINSKY: Well, he got--
AUDIENCE: I know in the childhood development class,
you read Piaget's books.
MARVIN MINSKY: Yeah.
In Piaget's later years, he got into algebra.
And he wanted to be more scientific and studied
logic and a few things like that and became less scientific.
It was sort of sad to--
I can imagine being browbeaten by mathematicians,
because they're the ones who were getting published.
And he only had-- how many books did Piaget--
AUDIENCE: But if I may add a comment about Piaget.
It really comes from an old friend of many of us, Seymour.
As you know, he was, of course, Piaget's mathematician
for many years.
MARVIN MINSKY: We got people from Piaget's lab.
AUDIENCE: But Seymour said that he felt that Piaget's best
work was his early work, especially
like building his case studies.
And one time when we were talking
about the issue of focusing from the AI lab
and worked on in psychology here,
Seymour said he felt that was less necessary than more
of a concentration on AI, because he expected
in the future the world of study of the mind
would separate into two individual studies, one much
more biological, like the neurosciences of today,
and the other focus more on the structure of knowledge
and on representations and in effect
the genetic epistemology of Piaget.
Then he added something that became a quote later.
And it was, "Even if Piaget's marvelous theory today
proved to be wrong, he was sure that whatever replaced
it would be a theory of the same sort, one
of the development of knowledge in all its changes."
So I don't think people will get away from Piaget however much
they want.
MARVIN MINSKY: I don't think so either.
I meant to introduce our visitor here, because Bob Lawler here
has reproduced a good many of the kinds of studies
that Piaget did in the 1930s and '40s.
And if you look him up on the web--
you must have a few papers.
AUDIENCE: I better tell you what the website is, because it's
still hidden from web searches.
It's nlcsa.net.
MARVIN MINSKY: That would be hard to--
AUDIENCE: Natural Learning Case Study Archive dot net.
It's still in process, still in development.
But it's worth looking at.
MARVIN MINSKY: How many children did Piaget have?
AUDIENCE: Well, Piaget had three children--
MARVIN MINSKY: So did you--
AUDIENCE: Not in his study.
But what he did was to mix together
the information from all three studies
and supported the ideas with which he began.
So it was illustrations of his theories.
MARVIN MINSKY: Anyway, Bob, has quite a lot of studies
about how his children developed concepts of number and geometry
and things like that.
And I don't know of anyone else since Piaget
who has continued to do those sorts of experiments.
There were quite a lot at Piaget's institute in Geneva
for some years after Piaget was gone.
But I think it's pretty much closed now, isn't it?
AUDIENCE: Well, the last psychologist Piaget
hired, Jacques Benesch, is no longer at the university.
He retired.
And it has been taken over by the neo-Piagetians, who
are doing something different.
MARVIN MINSKY: Is there any other place?
Well, there was Yoichi's lab on children in Japan.
AUDIENCE: There are many people who take Piaget seriously
in this country and others.
AUDIENCE: So Robert mentioned that Feynman
had more representations of the world than, like, usual people.
Like when I talked about Einstein and the glial cells,
I referred to that because I believe
that k-lines are our way of representing the world.
And maybe Einstein had better ways of representing the world.
And I believe that, for example, agents as resources
are not different from Turing machines.
You can create a very simple Turing
machine that acts like agents, and you
have some mental states.
But there is no, I believe, good way of representing the world
and updating the representation of the world.
Like it seems to me that when you grow up,
you are learning how to represent
the world better and better.
And you have some layers.
And that's all k-lines.
And if glial cells are actually related to k-lines,
it means that Einstein had, like, better hardware
for representing the world.
And that's why he would be smarter than other people.
MARVIN MINSKY: Well, it's hard to--
I'm sure that that's right that you
have a certain amount of hardware,
but you can reconfigure some of it.
Nobody really knows.
But some brain centers may have only a few neurons.
And maybe there's some retrograde signals.
So that if two brain centers are simultaneously activated,
then usually the signals only go one way,
from one to the other.
They have to go through a third one to get back.
But it could be that the brain--
that the neurons have the property that if two centers are
activated, maybe that causes more connections
to be made between them that can then be programmed more.
I don't think anybody really has a clear idea of whether you
can grow new connections between brain centers
that are far apart.
Does anybody know?
Is there anything--
AUDIENCE: It used to be common knowledge
that there was no such thing as adult neurogenesis.
And now it is known that it exists
in certain limited regions of the brain.
So in the future, it may be known
that it exists everywhere.
MARVIN MINSKY: Right.
Or else that those experiments were wrong.
And they were in a frog rather than a person.
AUDIENCE: Lettvin claimed that you
could take a frog's brain out and stick it in backwards
and pretty soon it would behave just like it used to.
MARVIN MINSKY: Lettvin said?
AUDIENCE: Yeah.
Of course.
I don't know if he was kidding or not.
You never could tell.
MARVIN MINSKY: You could never tell when he was kidding.
Lettvin was a neuroscientist here
who was sort of one of the great all time neuroscientists.
He was also one of the first scientists
to use transistors for biological purposes
and made circuits that are still used in every laboratory.
So he was a very colorful figure.
And everyone should read some of his older papers.
I don't know that there were any recent ones.
But he had an army of students.
And he was extremely funny.
What else?
AUDIENCE: So continuing on the idea of hardware
versus software, what do you think about the idea
that intelligence or humans may need strong instincts as when
they're born in order like-- hence
the interplay between their instincts,
like they know to cry when they're hungry
or to look for their mother.
They need these instincts in order
to develop higher orders of knowledge.
MARVIN MINSKY: You'd have to ask L. Ron Hubbard for--
I don't recall any real attempts to--
I don't think I've ever run across anybody claiming
to have correlations between prenatal experience
and the development of intelligence.
AUDIENCE: That's not what I'm talking about.
I'm talking about before intelligence
is being developed, like you learn language,
before you learn language, you need
to have a motivation to do something.
So you need to have instincts, instinctual reactions
to things.
Like traditional experience with knowledge
after you're born, you--
MARVIN MINSKY: Well, children learn language,
you know, 12 to 18 months.
What are you saying that they need some preparation?
I'm not sure what you're asking.
AUDIENCE: So think of it from an engineering point of view.
If you were to build like a robot, what you need to program
is some instincts, some like rule of thumb
algorithms in order to get it started in the world
in order to build experiential knowledge.
MARVIN MINSKY: You might want to build something
like a difference engine, so that you can represent a goal
and it will try to achieve it.
So you need some engine for producing any behavior at all.
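[The "difference engine" Minsky mentions is the goal-pursuing loop he describes in The Society of Mind, in the spirit of Newell and Simon's GPS: compare the current state to the goal, find a remaining difference, and apply an operator that reduces it. This is a hedged toy sketch, not Minsky's own formulation; the domain and operator names are invented for illustration:]

```python
# Sketch of a difference engine: represent a goal as a set of
# features, and repeatedly apply an operator that reduces one of
# the differences between the current state and the goal.

def difference_engine(state, goal, operators):
    """state, goal: sets of features.
    operators: dict mapping a feature (a difference) to a function
    that transforms the state so it has that feature."""
    state = set(state)
    while state != goal:
        differences = goal - state
        # Pick any difference we know how to reduce.
        reducible = [d for d in differences if d in operators]
        if not reducible:
            return None  # stuck: no operator reduces a remaining difference
        state = operators[reducible[0]](state)
    return state

# Hypothetical toy domain: getting dressed.
ops = {
    "socks": lambda s: s | {"socks"},
    "shoes": lambda s: s | {"shoes"},
}
result = difference_engine(set(), {"socks", "shoes"}, ops)
print(sorted(result))  # ['shoes', 'socks']
```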
AUDIENCE: Right.
So like if you take the approach that like maybe to build an AI,
you should build like an infant robot
and then you teach it as you would like a human child.
Then would it be useful to make it
dependent on like some other figure
in order to help it learn how to do things like a human child
would?
MARVIN MINSKY: Well, in order to learn,
you have to learn from something.
And one way to learn is in isolation, just to have some--
you could build a goal to predict what will happen.
And the best way to predict, as Alan Kay put it once,
the best way to predict the future is to invent it.
So you could make a--
or could put a model of an adult in it to start with, so that--
in other words, one way to make a very smart child
is to copy its mother's brain into a little sub-brain when
it's born.
And then it could learn from that instead
of depending on anybody else.
I'm not sure-- you have to start with something.
Of course, humans, as Bob mentioned or someone mentioned,
if you take a human baby and isolate it,
it looks like it won't develop language by itself, because--
I don't know what because.
In fact, I remember one of our children
who was just learning to talk.
And something came up, and she said, what because is that.
Do you remember?
It took a while to get her to say why.
She would come up and say what because.
And I would say, you're asking why did this.
After a long time she got the hint.
But-- why do all w-h words start with w-h?
AUDIENCE: One of them doesn't--
how.
MARVIN MINSKY: Could you say whow?
How.
Is there a theory?
AUDIENCE: Not that I know of.
MARVIN MINSKY: It's a basic sound
telling you're making a query before you
can do the rising inflection.
It's interesting.
Is it true in French?
Quoi?
The land of the silent letter.
Anybody know what's the equivalent of w-h words
in your native language?
AUDIENCE: N.
MARVIN MINSKY: What?
AUDIENCE: N.
MARVIN MINSKY: N?
AUDIENCE: Yeah.
MARVIN MINSKY: In what?
AUDIENCE: Turkish.
MARVIN MINSKY: Really?
They all start with n?
Wow.
Interesting.
Maybe the infants have an effect on something.
Do questions in Turkish end with a rise?
AUDIENCE: Yeah.
So only the relevant w-h questions--
OK, all questions end in kind of an inflection.
But normally, you have a little kind of little word
that you would put at the end of any sentence
to make it into a question, except for the w-h questions,
which are standalone ones.
You don't need them.
MARVIN MINSKY: Yes, you'd say, this is expensive?
They don't need the w-h if you do enough of that.
Huh.
So question, is that in the brain at birth?
AUDIENCE: Is that pattern mirrored in English where
you can say, is this expensive?
But you can say how expensive is this
without that rising intonation.
It mirrors using the separate word,
but you don't need that separate word if it's an end word.
AUDIENCE: But if you're saying how expensive
is this without the question inflection,
it almost sounds like you're making
a statement about just how ridiculously expensive it is.
Like you're going, how expensive is this
versus how expensive is this?
MARVIN MINSKY: Well, I should let you go.