11. Mind vs. Brain: Confessions of a Defector

MIT OpenCourseWare
4 Mar 2014
108:31

Summary

TLDR: In this video, David Dalrymple, a researcher who moved from the MIT Media Lab to the biophysics PhD program at Harvard, shares the story of, and insights from, his defection from AI to neuroscience. He discusses the connection between neuroscience and AI, and argues that although our understanding of the universe is fairly deep, we know very little about the nature of consciousness and cognition. Dalrymple compares present-day neuroscience to astronomy in the age of Copernicus, suggesting that we are on the eve of a major breakthrough. He introduces several modern neuroscience tools, including optogenetics and multiphoton microscopy, which let scientists control and observe brain activity with unprecedented precision. He also explores the dynamics and complexity of neural networks, and how these tools can be used to study the nervous system of model organisms such as the nematode C. elegans. Dalrymple's goal is to build a detailed model of the C. elegans nervous system that explains its behavior, work he hopes will provide a foundation for understanding more complex brains.

Takeaways

  • 📚 David Dalrymple was an AI researcher before moving to neuroscience; he is now a biophysics PhD student at Harvard, focused on the behavior and neural network of the nematode C. elegans.
  • 🏛️ Dalrymple believes that although neuroscience is still at an early stage, new technologies such as optogenetics and multiphoton microscopy could bring breakthrough progress within the coming decades.
  • 🔬 With optogenetics, scientists can precisely control the activity of specific neurons: light-sensitive ion channels are genetically engineered into neurons, which can then be activated or inhibited with light.
  • 🧬 Dalrymple notes that genetic techniques let researchers mimic the loss of a neuron, or alter its activity, without harming the worm, and then observe how its behavior changes.
  • 💡 He stresses the importance of mathematical and computational models for understanding nervous systems, and suggests that a new mathematical theory may emerge to help us better understand how the brain works.
  • 🌐 Dalrymple discusses scale-free networks, which may relate to aspects of human consciousness and cognition.
  • 📈 He covers the connection between neuroscience and AI, and how studying simple model organisms such as the worm can illuminate more complex brain function.
  • 🧠 Dalrymple argues that fully understanding the human brain will likely require study at every level, from single-neuron activity to the dynamics of whole-brain networks.
  • 🤖 He touches on the development of AI, and how mimicking how the brain works could inform the design of more advanced AI systems.
  • 🔬 Dalrymple discusses techniques used in neuroscience experiments, including manipulating neurons to study animals' behavioral responses.
  • ⚙️ He also mentions risks that future technology could bring, including the potential misuse of bioengineering and gene editing.

Q & A

  • What does the title of David Dalrymple's talk, 'Mind vs. Brain--Confessions of a Defector', mean?

    -It is the title of his lecture, meant to capture his move from artificial intelligence (AI) to neuroscience. He calls himself a "defector" because he used to work in AI, but is now a biophysics PhD student at Harvard, studying the nervous system of the worm and trying to understand how worms "think."

  • How does Dalrymple see the relationship between neuroscience and artificial intelligence?

    -He believes both fields are trying to understand what thought is. He pictures science as a tower: physics at the bottom, with chemistry, biology, and neuroscience above it. On the computer-science side, you start from the theory of computation, then software engineering, then AI. Both are stretching toward an understanding of the nature of thought.

  • Why did Dalrymple leave AI to study worms?

    -He says he didn't really have many good ideas in AI; although other people thought his ideas were good, this scared him, so he backed away a little. He then found a really interesting problem in neuroscience and decided to leave AI at MIT and study the worm at Harvard.

  • How does Dalrymple view the nature of human experience?

    -He believes human experience is dominated by consciousness or cognition, about which we know very little. He notes that cognitive science ("cog sci") is the field connecting neuroscience and AI, but it is still quite fuzzy.

  • What hierarchy of scientific fields does Dalrymple describe?

    -A hierarchy starting from physics at the base, then chemistry, biology, and neuroscience. On the computer-science side, it starts from the theory of computation, followed by software engineering and AI.

  • How does Dalrymple describe his current research?

    -He is using multiphoton microscopy and optogenetic tools to precisely control and measure neural activity in the worm. He hopes to build a model of how the worm's 302 neurons interact and how they drive behavior.

  • What is special about the nervous system of C. elegans?

    -C. elegans is the only organism with a fully known connectome, meaning we know how all 302 of its neurons are wired. However, knowing the wiring alone is not enough to understand the worm's behavior or how it computes.

  • Which technologies does Dalrymple mention for studying nervous systems?

    -Multiphoton microscopy, optogenetics (using light-sensitive channels such as channelrhodopsin), and the fluorescent protein GCaMP. These can be used to activate or inhibit specific neurons and to measure neural activity via calcium concentration or membrane potential.

  • How does Dalrymple see the future of neuroscience research?

    -He expects the field to move on to progressively more complex organisms: zebrafish, fruit flies, mice, cats, dogs, monkeys, and ultimately humans. He anticipates a new mathematical insight, on a par with the discovery of calculus, that will help us understand how complex networks in living systems emerge and perform computation.

  • Which figures from the history of science does he mention?

    -Copernicus, Newton, and Ernest Rutherford, all major figures in the history of science.

  • Why does Dalrymple say biology is still largely in a "stamp collecting" phase?

    -He argues that biology is still largely descriptive and classificatory, rather than, like physics, describing precisely how things work with equations. With the brain, he points out, merely cataloging things is not enough to understand it as a dynamical system interacting with its surroundings.

  • What does he say about behavioral studies of the worm?

    -Worms exhibit stereotyped behaviors, such as omega turns and reversals, under specific experimental conditions. His goal is a virtual worm model that, placed in the same conditions, exhibits the same behaviors as the physical worm.

  • How does Dalrymple view traditional experimental methods in neuroscience?

    -He thinks classic methods, such as measuring the brain's response to stimuli with electrodes, are valuable but provide only limited information. He is exploring newer techniques such as optogenetics and multiphoton microscopy for more precise neural control and monitoring.

  • What does he say about the genetics underlying the worm's nervous system?

    -He mentions that roughly 100 genes may be involved, controlling key neural components such as channels and transporters.

  • What does he say about the worm's life cycle and genetic manipulation?

    -The worm's life cycle is very short, only about four days, and it is a self-fertilizing hermaphrodite, so it effectively clones itself. Worms also tolerate freezing and can survive years of storage in liquid nitrogen.

  • What does he say about behavioral variation in the worm?

    -When certain genes are mutated, the network structure changes; sometimes this leaves a neuron unable to fire, making it an obstacle to signal propagation.

  • How does he view the complexity of the worm's nervous system?

    -Although the worm has only 302 neurons, each is highly specialized for particular tasks. By studying this system he hopes to find the basic neural mechanisms that control behavior.

  • What experimental challenges does he mention?

    -The need to control the worm's sensory and motor neurons simultaneously in order to simulate its behavior in a virtual environment, and the need for new algorithms to track and decode the activity of the worm's rapidly moving nervous system.

  • What does he see as the future trend in neuroscience research?

    -Future research will rely increasingly on new techniques and methods, such as optogenetics and multiphoton microscopy, which allow the nervous system to be studied with unprecedented precision and control.

Outlines

00:00

📚 Educational Resource Sharing and the Future of AI

David Dalrymple opens by thanking Marvin and introducing his talk, 'Mind vs. Brain--Confessions of a Defector'. He was formerly an AI researcher, focused on finding models of computation suited to building AI. Growing uneasy with the field, he moved to neuroscience and is now pursuing a biophysics PhD at Harvard, studying how worms think. He notes that although the nature of consciousness and cognition remains unclear, cognitive science, the still-fuzzy bridge between neuroscience and AI, may yet see a breakthrough.

05:04

🧠 The Convergence of Neuroscience and AI

David discusses the relationship between neuroscience and AI and how both fit into the history and taxonomy of science. Starting from mathematics, he builds a tower of the sciences, from physics to chemistry to biology to neuroscience, alongside computer science's progression from the theory of computation to software engineering and AI. He emphasizes the shared pursuit of understanding the nature of thought, and lays out current challenges in both fields, including metaphors for the brain and how little we know about human experience.

10:05

🔬 Experimental Methods in Neuroscience

David critiques traditional neuroscience experiments, noting that they usually ignore the brain as a dynamical system. He cites Hubel and Wiesel's experiments, which laid the foundation for our understanding of mammalian vision but revealed nothing about how different brain regions relate to one another. He also discusses newer techniques, such as activating and inhibiting neurons through specific channels, and the use of multiphoton microscopy.

15:08

🐛 C. elegans as a Model Organism

David focuses on C. elegans, a famous model organism in neuroscience. He describes how genetic engineering and lasers can be used to precisely control and observe the worm's neural activity, and introduces GCaMP, a fluorescent protein that serves as an indicator of neural activity. His goal is to use these tools to build a complete neural-network model that explains the worm's behavior.

20:09

🔬 The Future of Neuroscience

David discusses the field's prospects over the next 20 to 30 years. Once the C. elegans network is solved, the next targets will be more complex organisms: zebrafish, fruit flies, mice, cats, dogs, monkeys, and eventually humans. He also addresses how to validate neuroscience models, including simulating worm behavior and manipulating specific neurons.

25:09

🧬 Genes and the Adaptability of Neural Networks

David answers questions about genes and the adaptability of neural networks. He notes that the C. elegans network does not adapt and recover from damage the way mammalian brains do. He also addresses questions about neural development and plasticity, and how a model might capture such dynamic changes.

30:10

📈 Measuring and Controlling Neural Activity

David discusses the possibilities of measuring and controlling neural activity with multiphoton microscopy and genetically expressed proteins. He explains how adding more lasers can increase the parallelism and speed of experiments, and stresses the enormous potential of these techniques for studying the brain.

35:14

🧵 Neural Connections and Learning

David explores questions about neural connectivity and learning, including network plasticity and how to capture it in a model. He also addresses questions about neural development, and how to observe and understand these processes experimentally.

40:15

🧪 Ethics and Risks in Neuroscience

David and the audience discuss the ethics of neuroscience experiments, especially the possibility of using viruses to control human behavior. He also mentions intriguing cases of parasites altering their hosts' behavior, and what such phenomena suggest for neuroscience.

45:16

🌐 The Security of the Internet

In the final part of the discussion, David and the audience turn to internet security: why the internet has not yet been destroyed by viruses, and what risks may lie ahead. They also discuss evolution, gene editing, and the ethical boundaries of scientific experimentation.

Keywords

💡MIT OpenCourseWare

MIT OpenCourseWare is an MIT initiative that provides high-quality educational resources for free over the internet. It is mentioned in the video as the channel for support and access to materials, reflecting a commitment to open education and knowledge sharing.

💡AI-ist

In the video, "AI-ist" refers to a researcher devoted to artificial intelligence. David Dalrymple was once an AI-ist but later moved to neuroscience, a shift that illustrates the possibility and importance of cross-disciplinary research.

💡Model of computation

A model of computation uses mathematics and logic to simulate and reason about complex systems such as AI and nervous systems. David discusses the idea of building computational models to reproduce C. elegans behavior, which bears directly on the video's core theme: exploring the computational basis of consciousness and cognition.

💡Neuroscience

Neuroscience is the study of the nervous system and the brain: their development, structure, function, genetics, and relation to behavior. In the video, David's move from AI to neuroscience, particularly the study of the C. elegans neural network, highlights the field's central role in understanding cognition and consciousness.

💡C. elegans

C. elegans is a nematode widely used as a model organism in biological research. David discusses using C. elegans to study neural networks and behavior, because its nervous system is relatively simple and its wiring has been completely mapped: the so-called connectome.

💡connectome

A connectome is the complete map of all neural connections in an organism. The C. elegans connectome is singled out in the video because it is the only complete neural wiring map known, offering a unique vantage point for understanding the worm's behavior and cognition.

💡Optogenetics

Optogenetics combines genetics and optics to let researchers control the activity of specific nerve cells with light. David mentions using optogenetic tools to precisely control and measure neural activity in C. elegans, a powerful approach for probing nervous-system function.

💡Consciousness

Consciousness is discussed as the core feature of human experience and a key target of cognitive science and neuroscience. David considers its place in the history and taxonomy of science, and how studying simple model organisms might bring us closer to understanding its nature.

💡Theory of computation

The theory of computation is the foundation of computer science, covering the mathematics of algorithms and computational processes. David starts from this vantage point in discussing how computational models better matched to biological nervous systems might advance AI.

💡History of science

The history of science traces the evolution of scientific theories, experiments, and practice. It is invoked in the video to show the progress of our understanding of the mind-brain relationship, comparing current neuroscience to the astronomical revolution of Copernicus's time.

💡Dynamical system

A dynamical system evolves over time, with a state describable by a set of differential equations. David likens neural networks to dynamical systems, stressing the importance of their dynamic interactions for understanding brain function, which ties closely to the video's theme of how brains and minds work.

Highlights

MIT OpenCourseWare offers free, high-quality educational resources under a Creative Commons license.

David Dalrymple moved from AI to neuroscience and is now a biophysics PhD student at Harvard, studying how worms think.

Dalrymple discusses the connection between neuroscience and AI, placing it in the context of the history and taxonomy of science.

He identifies as a mathematician and views every scientific field from a mathematical vantage point.

Dalrymple argues that while we understand the universe down to a certain level of description, we know very little about how consciousness or cognition works.

He describes cognitive science as the bridge between neuroscience and AI, though the field is still quite fuzzy.

Dalrymple compares our understanding of the mind-brain relationship to astronomy in Copernicus's time, suggesting we are at the dawn of a new era.

He anticipates a new mathematical insight for cognitive science, possibly as important as the discovery of calculus, that would explain how networks in living systems emerge to perform complex computation.

Dalrymple discusses the concept of scale-free networks and their applications in neuroscience and sociology.

He quotes Ernest Rutherford's division of science into physics and stamp collecting, suggesting that biology remains largely "stamp collecting."

Dalrymple stresses the importance of biophysics, and how it can take us beyond mere biological observation to a deeper understanding of dynamical systems.

He describes the traditional experimental approach in neuroscience, measuring brain activity with wires and stimuli, and its limitations.

Dalrymple introduces optogenetics, a newer neuroscience tool that lets researchers activate or inhibit neurons with light.

He discusses multiphoton microscopy, a technique that can target and manipulate individual neurons with high precision.

Dalrymple mentions GCaMP, a fluorescent protein that serves as an indicator of neural activity by reporting calcium concentration.

He lays out the field's prospects for the coming decades, including a research path from the worm to more complex organisms such as mice and, ultimately, humans.

Dalrymple highlights the challenge of solving the worm: modeling the behavior of its 302 neurons and comparing the model against real behavior.

He discusses neural plasticity and connectome changes during learning, and how these changes bear on our understanding of brain function.

Dalrymple describes the potential of optogenetics and multiphoton imaging in neuroscience research, and how these techniques can help us better understand the brain.

Transcripts

play00:00

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

play00:22

DAVID DALRYMPLE: Thanks, Marvin. I decided to call my talk "Mind vs. Brain-- Confessions of a Defector." I used to be an AI-ist. My thesis was reviewed by Marvin. I worked in the Media Lab. I thought about models of computation that might be more suited to building AIs. Really, I didn't have that many good ideas, although people thought they were good ideas. And this kind of scared me. So I backed away a little bit. And then I found a really cool problem in the area of neuroscience. And I've now left AI at MIT. And I'm a PhD student in biophysics at Harvard. And I am working on worms. And I'm trying to figure out how they think, to the extent that they do.

play01:09

And so I wanted to sort of go over, not really the details of neuroscience or the details of my work. I mean, I'm going to do sort of what Marvin does. I'm going to talk for a little bit. And then we can talk about whatever you want. And if you want to get into the details, that's great. But first, I just wanted to kind of give an overview of, from my perspective, where I see neuroscience and AI sort of fitting in with each other and with the larger context of sort of the history of science and the taxonomy of science.

play01:36

So I sort of self-identify as a mathematician, to the extent that people have sort of discipline identities, like gender identities, or racial or cultural identities. My discipline identity is math. And so I see everything as sort of springing out from that. So on one side, you have sort of the scientific tower, where you have physics. And on top of physics, you put chemistry. And on top of chemistry, you put biology. And on top of biology, you have neuroscience. And there is also the sort of computer science, where you start from the theory of computation. And then you have sort of software engineering, and then AI.

play02:28

And in both cases, what you're really stretching toward is an understanding of what thought is. And we sort of got to some success in sort of figuring out what the universe is, at least down to a certain level of description.

[LAUGHTER]

play02:50

I could turn on a blackboard light.

MARVIN MINSKY: Is there one?

DAVID DALRYMPLE: Yeah, there is a blackboard light.

MARVIN MINSKY: I have to correct this down here. You don't have transparencies for this?

DAVID DALRYMPLE: No. I don't have-- I don't know where to buy transparencies anymore.

[LAUGHTER]

MARVIN MINSKY: I have some transparencies.

DAVID DALRYMPLE: That would have been good to know.

[LAUGHTER]

But--

MARVIN MINSKY: You can't use them.

DAVID DALRYMPLE: What's that?

MARVIN MINSKY: But you can't use them.

[LAUGHTER]

play03:19

DAVID DALRYMPLE: But what we're really trying to get to is sort of this fundamental question of what is human experience? And human experience is sort of dominated by consciousness, or cognition, or whatever you want to call it. And we really don't know what's going on there. We have something called cog sci. And it definitely connects to both neuro and AI, but it's pretty fuzzy right now. And a lot of people take the metaphor of transistors in talking about the brain. And that, oh, as neuroscientists we spend a lot of time looking at the details of what happens in the nonlinear regime of this sort of neurotransistor. But it really doesn't matter, because what matters is when you put the things together and so on, which is a good metaphor.

play04:04

But a metaphor that I also like, and I see used less often, is that we're sort of right now looking at thought and its relation to the brain. I think we're sort of where Copernicus was when he was thinking about the planets. In a sense, we had sort of the right basic idea. We have the idea that thought happens in brains. And Copernicus had the idea that planets orbit the Sun, which at the time was a new idea for Copernicus. And in the relative scheme of things, it's kind of a new idea for us that thought happens in brains and happens by electrical impulse. But Copernicus didn't have gravity. He didn't have Newton. And so in describing the orbits, he had all of these little corrections, and epicycles, and deferents in trying to make sense of the things that would later all follow from this very simple theory of calculus and of gravity, but that was yet to be discovered. And so I think that there's something that we're getting to in the realm of cognitive science, some sort of new mathematical insight that I think will be on the same par as the discovery of calculus in terms of how networks emerge to perform complex computation in living systems.

play05:19

And the way that I think about that is when we really get down to the essence of calculus, it's about what happens in the limit of things sort of acting in similar ways as you cut them into smaller and smaller pieces and have more of those pieces. And what we're looking at both in neuroscience, and in sociology to a lesser extent-- because fewer people are doing quantitative things there-- but we're looking at scale-free networks, where as you partition the network, at different sizes of partitions you get the same sort of in-degrees and out-degrees. And we really don't know what we're talking about when we go into scale-free networks. But it seems that there's something there that relates to sort of the mysteries-- the things that we're bumping into that I feel are the same sorts of things that people were bumping into shortly before we figured out calculus. So anyway, that's sort of my big philosophical spiel.
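The scale-free-network idea is easy to poke at with a toy model. The sketch below is my own illustration, not anything from the talk: it grows a graph by preferential attachment, the textbook mechanism that yields the heavy-tailed degree distributions characteristic of scale-free networks, and contrasts the largest hub's degree with the typical node's.

```python
import random
from collections import Counter

def preferential_attachment(n, seed=0):
    """Grow a graph where each new node links to an existing node
    chosen with probability proportional to its current degree."""
    rng = random.Random(seed)
    # 'ends' holds one entry per edge endpoint, so sampling it
    # uniformly samples nodes proportionally to their degree.
    ends = [0, 1]  # start with a single edge between nodes 0 and 1
    for new in range(2, n):
        target = rng.choice(ends)
        ends += [new, target]
    return Counter(ends)  # node -> degree

deg = preferential_attachment(10_000)
degrees = sorted(deg.values(), reverse=True)
print("max degree:", degrees[0])
print("median degree:", degrees[len(degrees) // 2])
# A few hubs end up with degrees far larger than the typical node's
# (most nodes are leaves): the heavy tail of a scale-free network.
```

The same mechanism, with each new node attaching to several existing nodes, underlies the Barabási-Albert model often used to fit degree distributions measured in real neural and social networks.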

play06:17

I also wanted to tie into this quote from Ernest Rutherford, who's kind of one of the greatest curmudgeons of science history, who once said, "All science is either physics or stamp collecting." Either you're writing down the equations and you know exactly how things work, or you're just sort of saying, oh, this looks nice. Let me write down what it looks like and where I found it. And biology is definitely still largely in the stamp collecting realm. And it doesn't have to be. It's not the nature of studying living systems, which is the reason that we have biophysics. It's the reason that I am in a biophysics program. But there's sort of this cultural tradition in biology that goes back to sort of Darwin, where you just sort of look around the world. You write down what you see. And it's a time-honored tradition. And it certainly gets you pretty far. But it seems to break down when we look at the brain, because you start cataloging these things that are so minute. And you start cataloging them in isolation, instead of considering them as dynamical systems that interact with their surroundings. And it seems that you really have a hard time putting those pieces back together once you've sort of collected them in separate observations. So that's another point I wanted to make.

play07:38

And then I was just going to talk a little bit about why I think things are changing, especially in the neuro domain. There's the sort of classic way that you do neuro experiments, is you have some sort of furry creature. And you present some sort of stimulus. And then you stick a wire somewhere into the brain. And you measure what happens depending on the stimulus. And you don't know what you're looking at. But you know that you're looking at something. And you can get some sort of idea as to what things are similar. Hubel and Wiesel did these great experiments where they stuck electrodes into cats' visual cortex and basically just kind of waved their arms around and saw that some of the neurons were orientation sensitive. And that's the foundation for much of our current knowledge and research about vision in mammals. But it doesn't tell you how the different things that you're looking at relate to each other. And it gives you only the merest glimpse. And even in the most dense sort of electrode applications, you're going to get a maximum of thousands or maybe 10,000 signals out of this at once. And most likely none of those neurons that you're looking at are going to be anywhere near each other on the scale of synapses. And so you're essentially just sampling a population. And, in fact, there's a lot of research in this field that goes under the rubric of population dynamics, where you're sort of just looking at things in the aggregate.

play09:23

I heard a nice little metaphor actually just today in an unrelated class about populations, where if you're, for instance, examining a population of people walking down Fifth Avenue in New York City, you can measure things like the average rate that people are walking down. But you're never going to capture something if you're just looking at flow across everything, like every 100 meters people will turn into a shop and stop for a few minutes and then come back down and start again. So you're only getting sort of the very broad strokes.

play09:59

Or otherwise, you're taking the neuron out of its natural context. And you're just saying, OK, here I have a neuron in a dish, whatever, in some sort of growth medium. And I'm just going to probe it and see what happens when I stimulate this neuron in different places. And then you're likely exploring parts of the phase space that this neuron would never experience in the system. And you're getting no indication as to which parts of the phase space it actually is in, or how that would relate to other parts of the system. And when you're dealing with this complex information processing network, that's pretty critical. So that's sort of the classical state of neuroscience, which basically plateaued I think about 30 years ago, which is the reason why people like Marvin are pretty frustrated with it. But in more recent times, we've started to get some different ways of looking at neural systems that I think are really exciting. This is why I decided now is a good time to be a neuroscientist.

play11:09

So now if we zoom in to sort of a physicist's spherical neuron, it has a cell bilayer. And neurons are powered by channels, which interrupt the bilayer and can pass certain types of ions. And naturally ions carry charge. And some channels, depending on the concentrations and the voltages, will have positive ions flowing in; positive ions flowing out; negative in, out. And these dynamics basically form the basis for neural activity, and action potentials, and everything.
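Those ion-flow dynamics are often caricatured as a leaky integrate-and-fire unit: net inward current charges the membrane toward a threshold, a spike is emitted, and the voltage resets. The sketch below is a standard textbook abstraction, not a model from the talk, and all constants are illustrative.

```python
def simulate_lif(current_na, t_stop_ms=100.0, dt_ms=0.1):
    """Leaky integrate-and-fire: input current minus a leak charges
    the membrane; crossing threshold emits a spike and resets."""
    v_rest, v_thresh, v_reset = -70.0, -55.0, -70.0  # mV (illustrative)
    tau_ms, r_mohm = 10.0, 10.0  # membrane time constant, resistance
    v = v_rest
    spikes = []
    t = 0.0
    while t < t_stop_ms:
        # dV/dt = (-(V - V_rest) + R*I) / tau, integrated with Euler steps
        v += dt_ms * (-(v - v_rest) + r_mohm * current_na) / tau_ms
        if v >= v_thresh:        # action potential
            spikes.append(t)
            v = v_reset          # membrane resets after the spike
        t += dt_ms
    return spikes

# Suprathreshold drive (R*I = 20 mV > the 15 mV gap): repeated spikes.
print(len(simulate_lif(2.0)))
# Subthreshold drive (R*I = 10 mV < 15 mV): the voltage saturates
# below threshold and no spike ever occurs.
print(len(simulate_lif(1.0)))  # -> 0
```

The real biophysics (Hodgkin-Huxley-style channel gating) is far richer, but this caricature is exactly the "what matters is when you put the things together" level of description the transistor metaphor suggests.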

play11:54

And we found some of these channels are things that functionally are like channels in prokaryotes-- that means basically single-celled organisms, in algae, in Archaea, in bacteria even-- that have really useful properties. For instance, there's one called channelrhodopsin, which turns on if and only if there's blue light impinging upon it. And this was discovered about 15 years ago. And about 10 years ago, it was sequenced. And now with the ability to synthesize genes and introduce them into other organisms, we can take that gene and add it to a neuron, which is not at all where it belongs. Nature would never put this type of channel in a neuron because there's no reason for neurons to be sensitive to light unless they're photoreceptors in the retina.

play12:49

But now that we have this blue light activated channel, we can point a laser at the cell. And when we turn it on, if it's the right wavelength, the cell gets activated. When we turn it off, the effect disappears. And so you can now do these very precise sorts of perturbations and see what happens. And because it's a genetically expressed channel, what a lot of people did initially is they would just express the channel in a certain class of cells that has a specific promoter that's been discovered. And then just use a wide-field blue lamp, light up the entire brain. And then only that class of cells in which you express a light-sensitive channel will turn on. So you can see what does this class of cells actually do? There is a similar channel that's activated by yellow light that removes-- well, actually it introduces negative ions, introduces chloride. And that causes the cell to hyperpolarize or deactivate. So you can inhibit-- selectively inhibit populations based on genetics. And this allows users to do something like a knockout. A lot of people do knockout mice, where you remove some class of cells. They use the behavioral deficit. But you can do it without any-- basically, you can do it with a positive control, because if you aren't shining yellow light into the skull, then it's just like a regular mouse. And this is really helpful for determining those effects.

play14:19

But even so, this is still population dynamics. You're still just talking about some broad class of cells. All pyramidal neurons, all basal ganglia, here's what happens when you remove them. So there's another piece of it. There's another piece of this puzzle, which is multiphoton microscopy. And this is something that-- how many people know multiphoton or two-photon? OK. So I recently learned about this too. And it's really cool, because this is the sort of thing that when you're first like eight years old and you hear about lasers, there's a sort of thing that you would imagine that you would do with lasers. And then you learn a bit more about lasers and you realize that lasers don't work that way. And then you learn more about lasers, you're like, whoa, actually you can do that with lasers.

play15:07

And what you do is you have one laser. It's a femtosecond laser, which is necessary because of the ridiculous synchronization that's required in doing this. Not only does it have to be a femtosecond laser, but you actually-- even though it's two-photon, you're using two beams, you can't have two lasers because they won't be well enough synchronized. So you have to split a laser into two beams. And then you focus-- using fancy optics that I don't know enough optics to draw-- those two beams onto a single point inside your sample. And the wavelength of this laser is twice the wavelength necessary to excite your channelrhodopsin or your halorhodopsin. And what happens is there's a small, but nonvanishing probability that two photons from the two branches of this will arrive at exactly the same point in not quite Planck time, but in sort of molecular excitation time. And if those two photons arrive close enough to each other, they'll have exactly the same effect on the state of the molecule as a photon with half the wavelength, so twice the energy, because those two photons each deliver that amount of energy, and so it gets doubled. So this only can happen at exactly the spot where the two beams converge. So you get this extremely selective, not only z slicing, but also an even more selective x and y slicing.
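The energy bookkeeping behind two-photon excitation is easy to verify numerically. Assuming, purely for illustration, an excitation peak near 470 nm for channelrhodopsin (the talk gives no numbers), two photons at twice that wavelength deliver, between them, exactly the energy of one 470 nm photon:

```python
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def photon_energy_j(wavelength_nm):
    """E = h*c / lambda: energy carried by a single photon."""
    return H * C / (wavelength_nm * 1e-9)

one_blue = photon_energy_j(470)          # single-photon excitation
two_infrared = 2 * photon_energy_j(940)  # two photons at twice the wavelength
print(one_blue, two_infrared)  # equal: halving wavelength doubles energy
```

Because excitation scales with the square of light intensity in the two-photon regime, it falls off steeply away from the focus, which is what buys the tight x, y, and z selectivity described above.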

play16:46

So you can use this to target an individual neuron. And you can do it repeatedly. You can do it reliably. Any space in your working volume you can target using basically acousto-optic deflectors, which again I can't draw. So they're going to be black boxes. But you can do it very fast. And it's expensive. But you can direct these at hundreds of Hertz. So you can essentially write to anywhere in the brain. You can turn things off or on as you wish, as long as you know the locations.

play17:22

And the other final piece of this optogenetics puzzle is there's a fluorescent protein called GCaMP, which is sort of like a GFP, but with a calmodulin attached to it. And the calmodulin binds calcium ions. And the GFP is a green fluorescent protein. But when the calmodulin isn't bound to a calcium, it sort of hangs out here and disturbs the conformation of the GFP so that it can't fluoresce. But when this binds a calcium, then the GFP fluoresces green. And calcium is one of the major signals for neuron firing. Especially for neurotransmitter release, you need an influx of calcium. So this basically tells you whether the neuron is active.
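Calcium-indicator traces like GCaMP's are conventionally reported as ΔF/F, the fluorescence change relative to a resting baseline, so that activity is comparable across cells with different expression levels. A minimal sketch with made-up numbers (nothing here is data from the talk):

```python
def delta_f_over_f(trace, baseline_frames=3):
    """Normalize a fluorescence trace to its initial baseline:
    dF/F = (F - F0) / F0, with F0 the mean of the first frames."""
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

# Toy GCaMP-like trace: flat baseline, a calcium transient, then decay.
trace = [100, 101, 99, 160, 140, 120, 105, 100]
dff = delta_f_over_f(trace)
print(max(dff))  # 0.6, i.e. a 60% fluorescence increase at the peak
```

Real pipelines estimate F0 more robustly (e.g. a rolling percentile) and must also deconvolve the slow decay of the calcium signal back into spike times, but the normalization step is this simple.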

play18:19

And as if that weren't enough-- because it is a second messenger-- just this year, there was a protein developed that's a membrane protein, which is also genetically encoded. All of these can be just engineered into a line of animals. And then you don't need to worry about it anymore, no injections or anything. So there's another one that-- originally, nature intended this as a proton pump. It's archaerhodopsin. It's a light-sensitive proton pump. But what they were able to do is to silence the proton pumping, basically to disable that aspect of the protein's function. But then they discovered that it actually then has a fluorescence which is proportional to the voltage across the membrane that it would be moving those protons across. So, in sum, you can activate neurons. You can inhibit neurons. You can measure the calcium concentration or activation. And you can measure the voltage. And you can do it all, anywhere you want, very fast.

play19:22

So this is basically a toolkit for doing experiments that you could really only dream of with electrophysiology. In particular, the one that I'm working on right now is the worm C. elegans, which is a very well-studied organism. And, in fact, it's the only organism for which we actually know the complete connectome. So a lot of people talk about connectomes. And it's sort of a dark secret of neuroscience that we already have the connectome for C. elegans and we can't do anything with it. Because it turns out that just knowing where-- you have one neuron here. And it synapses to this neuron here. And this controls the body wall muscles. That doesn't really tell you anything, like maybe this is an inhibitory synapse, maybe it's excitatory, maybe it's non-functional. Maybe it's stronger than the other synapses on that cell, maybe it's weaker. And so it basically gives you very little information to start from if you're trying to understand how this organism thinks or computes. But since we know where all of the neurons are, and there are only 302 of them, it's not crazy to think about using this microscope and these biophysical techniques to actually build a model, pair-wise if need be, all 90,000 pairs, of how every neuron affects the behavior of every other neuron. So that's what I'm working on.
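The "all 90,000 pairs" figure is just the count of ordered neuron pairs: how neuron A affects neuron B need not mirror how B affects A, so both directions count.

```python
n_neurons = 302  # every neuron in the C. elegans nervous system
# Ordered pairs (A, B) with A != B: each directed influence is
# measured separately, since synapses are directional.
ordered_pairs = n_neurons * (n_neurons - 1)
print(ordered_pairs)  # 90902, "all 90,000 pairs" in round numbers
```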

play20:49

I think that actually doing these sorts of observations is something that's never really been done before. And again, to make a ridiculously grandiose comparison, it's kind of like how Newton only was able to do what he did because of Galileo developing the tools to observe phenomena that had never been seen before. And I think that advances-- well, this is actually a quote from Sydney Brenner, who was the first person to suggest that you can study this worm and maybe learn something from its nervous system: "Advances in science usually come from new techniques, new discoveries, and new ideas, in that order." And so now we have the techniques. We're working on the discoveries. And the hope is that it will lead to new ideas. So questions?

play21:43

Yeah?

AUDIENCE: So you've given us this overview of the state of the art right now and possibly how it's going to be in five years, but how do you think neuroscience is going to be in about 20 years or maybe 30 years?

play21:54

DAVID DALRYMPLE: Well, what we can do right now is this most basic organism that nature has to offer. And the natural thing to do once that gets solved, which we don't know how long it'll take, because we don't know how much detail is really important, but I think that we can solve the worm in three or four years. And then the next step is the zebrafish, which is also optically transparent, which is handy for using these sorts of optical microscopes. But the zebrafish has 100,000 neurons. So it's a big jump in complexity. And it has a lot of the same sorts of brain regions that you see in mammals and even humans, although they often go by different names. But it's a similar structure. You know, it's a vertebrate. And it has eyes, which the worm doesn't. So that would be the next thing to look at. And I think that'll probably take another five years or so. And then maybe Drosophila-- bees are pretty complicated. That's the first place where you get something that resembles language. So that might be really interesting, and eventually, mice, cats, dogs, monkeys, and humans. And you know, definitely the path that this goes on in an ideal world is toward taking an individual human brain and turning it into a model.

play23:24

AUDIENCE: What would you consider to be solving?

play23:26

Or how much do we need to know about this?

play23:29

What are the problems there?

play23:31

What are the things that we still have to understand?

play23:35

DAVID DALRYMPLE: The way that I have set up the criteria,

play23:37

there is a big list of publications, basically

play23:42

of behavioral results.

play23:44

And there is a lot of stereotyped behaviors.

play23:46

There is something like 30 or 40 different conditions where

play23:49

you can put the worm in these conditions

play23:51

and they exhibit particular omega turns, or reversals,

play23:56

or things like that.

play23:58

So that's sort of the baseline, say, well,

play24:01

if you put the worm in these, the virtual worm

play24:03

sort of in these conditions in a virtual simulated Petri dish,

play24:06

they exhibit all the same behaviors.

play24:08

That's sort of your first-order check.
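The first-order check described here can be sketched as a tiny test harness. Everything below is hypothetical scaffolding: `CONDITIONS` stands in for the 30–40 published condition-to-behavior results, and `simulate_worm` stands in for the full dynamical model (here it just replays the table so the harness runs):

```python
# Hypothetical first-order check: run the virtual worm through each published
# assay condition and compare the behavior it emits against the literature.
CONDITIONS = {
    "anterior_touch": "reversal",
    "food_gradient": "omega_turn",
    # ... 30-40 stereotyped condition -> behavior entries from publications
}

def simulate_worm(condition):
    """Stand-in for the full simulation; here it just replays the table."""
    return CONDITIONS[condition]

def first_order_check():
    """True iff the virtual worm matches every published behavior."""
    failures = [c for c in CONDITIONS if simulate_worm(c) != CONDITIONS[c]]
    return len(failures) == 0

print(first_order_check())
```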

play24:10

And then the next step is, well, what

play24:12

happens if you remove a neuron?

play24:16

You pick one of the 302 neurons in a physical worm,

play24:18

you can ablate it with a laser.

play24:20

You can kill a cell specifically.

play24:22

Or you can just inhibit it with a halorhodopsin.

play24:26

Then you can see, does the virtual worm exhibit

play24:28

the same behavioral differences as the physical worm

play24:32

under that condition?

play24:34

And you can also do larger scale sorts of things.

play24:38

You can say, well, what happens if I

play24:40

were to activate this neuron 10 times a second every five

play24:45

seconds?

play24:46

And how would that change things?

play24:48

So you could do lots of different perturbations.

play24:50

And that, to me, is the best way to check that you have

play24:54

what I consider a biologically relevant model.
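The second-order, perturbation-based check can be sketched the same way. All names below are illustrative (AVA and AVB are used here only as example neuron labels, and the stand-in `behavior` function is not a real model):

```python
# Illustrative perturbation screen: for each single-neuron manipulation
# (laser ablation or halorhodopsin silencing), the virtual worm should
# reproduce the behavioral difference seen in the physical worm.

def behavior(ablated=None):
    """Stand-in model: reversal behavior depends on one command neuron."""
    return "no_reversal" if ablated == "AVA" else "reversal"

# Hypothetical physical-worm results: ablated neuron -> observed behavior.
physical_results = {None: "reversal", "AVA": "no_reversal", "AVB": "reversal"}

mismatches = {n for n, expected in physical_results.items()
              if behavior(ablated=n) != expected}
print(mismatches)  # empty set -> model passes this perturbation screen
```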

play24:57

But what you're looking for is not anything

play25:00

on sort of the-- you're looking for observables, basically.

play25:04

And I think that's what you have to do if you're doing science.

play25:09

You have to be looking at what's observable.

play25:11

And right now what's going on inside the synapses

play25:14

is not observable.

play25:15

So I'm not going to be simulating that.

play25:17

And part of the hypothesis--

play25:20

since I'm doing this as a PhD thesis,

play25:21

it has to answer a scientific question.

play25:24

And the question is, can you capture the qualitative aspects

play25:28

of behavior as viewed externally without modeling

play25:32

what's going on at the molecular dynamic level?

play25:36

Yeah?

play25:37

AUDIENCE: My question is on the connectome.

play25:38

So in my understanding of the human brain,

play25:42

I thought that neurons could grow new connections

play25:45

with other neurons.

play25:46

So in that sense, it's like the map

play25:49

of the connections between all pairs of neurons

play25:51

is constantly changing.

play25:52

DAVID DALRYMPLE: Yes.

play25:53

AUDIENCE: So how would that work when

play25:55

we're trying to find the connectome of a more

play25:57

complicated organism whose neurons

play26:00

do make new connections?

play26:01

DAVID DALRYMPLE: So C. elegans, nicely enough, doesn't do that.

play26:07

But you're absolutely right.

play26:09

Mammals do.

play26:10

Mammals do form new connections.

play26:12

And there is also, even in C. elegans,

play26:14

there is a question of development.

play26:17

It's been shown, not conclusively, but fairly

play26:20

convincingly, that electrical activity is not

play26:23

only important for cognition.

play26:25

It's also important for development.

play26:27

And if you introduce genes that basically only break

play26:31

action potential function of a neuron, those neurons,

play26:36

as they develop from birth onward,

play26:40

they don't form the connections that they should.

play26:44

And so there is something going on.

play26:45

There is some computation going on there

play26:47

that's development-specific probably,

play26:50

because in most areas of the brain, once you reach

play26:53

a certain level of maturity, those sorts of processes

play26:58

stop growing.

play27:00

So there is some sort of computation there.

play27:01

And I am explicitly leaving that out

play27:03

because I want to graduate in a reasonable amount of time,

play27:06

saying you know, development, future work.

play27:09

And it is.

play27:09

It's future work.

play27:10

And at the same time as hopefully, this is a success,

play27:13

someone will go look at the zebrafish,

play27:15

and someone will go look and try and figure out

play27:17

how C. elegans develops from a larval stage

play27:20

to an adult with all 302 neurons,

play27:23

and how they find each other to connect.

play27:25

And that's definitely important, because how the nervous system

play27:29

develops gives us some clue as to what

play27:32

the functional organization is, because the things that develop

play27:36

in concert and sort of stem from the same developmental program,

play27:42

in a sense, probably have the same functions

play27:45

when they're finished developing.

play27:48

But separate from development, there

play27:50

is also this question of learning.

play27:52

And if you do just capture connectome, there is a--

play27:57

it seems to me that there is a possibility

play27:59

that what you could wind up doing is capturing

play28:02

a connectome frozen in time.

play28:03

You could wind up with some anterograde amnesia

play28:07

effect, because if you're missing

play28:10

some aspect of plasticity, on a short-term scale,

play28:12

you would get the same sorts of responses,

play28:14

but you wouldn't get the same sorts of changes over time.

play28:18

So that is a possibility.

play28:21

The way that you can get around that is, if you have tools--

play28:25

again, we're talking 20, 30 years in the future.

play28:28

These are pretty recent.

play28:29

Who knows what we'll have then?

play28:31

If we can visualize something either

play28:34

at a lower level in terms of what's

play28:35

going on with transcription factors

play28:37

or if we can visualize how the axonal processes grow,

play28:41

then we can build models of that in the same way

play28:43

that now we can build models of sort

play28:46

of the steady-state dynamics in the sense

play28:49

of short time-scale dynamics of electrical activity.

play28:53

AUDIENCE: Thanks.

play28:54

DAVID DALRYMPLE: Yeah?

play28:56

AUDIENCE: So how many neurons can you look at at once

play28:59

with that [INAUDIBLE]?

play29:01

DAVID DALRYMPLE: So it depends on how many lasers you have.

play29:04

Number of lasers over 2 equals the number

play29:07

of simultaneous observations.

play29:08

But you can direct the lasers at hundreds of hertz.

play29:13

And so if you want to look at 100 neurons at 30 hertz,

play29:18

you can do that.

play29:19

You just have to multiplex them.
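The time-multiplexing arithmetic here is simple enough to write down. The specific numbers below (20 lasers, 300 Hz steering) are illustrative assumptions, not values from the talk; only the rule "lasers over 2 equals simultaneous observations" is from the source:

```python
def per_neuron_rate(n_lasers, steer_rate_hz, n_neurons):
    """Effective sampling rate per neuron when beams are time-shared."""
    beams = n_lasers // 2          # two lasers per simultaneous observation
    return beams * steer_rate_hz / n_neurons

# e.g. 20 lasers (10 beam pairs) steered at 300 Hz across 100 neurons
# works out to 30 Hz per neuron, the figure quoted in the talk.
print(per_neuron_rate(20, 300, 100))  # 30.0
```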

play29:21

And because things, especially in C. elegans,

play29:23

are on a fairly slow timescale, because the C. elegans doesn't

play29:26

have action potentials, or at least they're

play29:29

not thought to be significant for computation, it's doable.

play29:34

As technology gets better, again, you'll

play29:37

be able to scan faster.

play29:39

There are actually people working on fancier optics tricks

play29:42

that let you scan faster.

play29:44

And also you can just use more lasers.

play29:47

There is nothing to say that you have to be pointing one laser.

play29:51

You can multiplex much faster if you're

play29:53

talking about pulsing the lasers on and off,

play29:55

because they're femtosecond lasers.

play29:56

So the more lasers you have, the better.

play29:59

But ultimately, when we're talking

play30:00

about systems like mouse where you

play30:03

have to penetrate through millimeters of tissue

play30:06

to get to certain regions, it's probably not

play30:09

going to be optical.

play30:13

At least it's not going to be visible light.

play30:16

One potential direction is called magnetoencephalography.

play30:21

When there is a neural current, it induces a magnetic field

play30:24

just by Maxwell's equations.

play30:26

And there is-- right now if you have superconducting magnets,

play30:32

SQUIDs, you can detect the currents

play30:35

of order of 1,000 neurons activating at once.

play30:40

So you have to have, like, that level.

play30:42

But when you're talking about 100 billion neurons

play30:44

in the human brain, that's still pretty impressive.

play30:46

It's pretty impressively fine grained.

play30:48

It's much better than an MRI.

play30:51

And as time goes on, again, hopefully those things

play30:54

will continue to evolve and improve to the point

play30:56

where you can measure what's going on at a very low level.

play31:00

And then on the control side, similarly we

play31:03

have transcranial magnetic stimulation.

play31:06

And that also is rapidly increasing

play31:09

in resolution and accuracy.

play31:14

Sergei?

play31:14

AUDIENCE: Do you ever name the worms?

play31:16

DAVID DALRYMPLE: A friend of mine

play31:18

suggested Ellie for C. elegans, but I haven't come up

play31:23

with any others.

play31:28

Yeah?

play31:29

AUDIENCE: So this question is currently not

play31:31

super well formed, but [INAUDIBLE]

play31:33

DAVID DALRYMPLE: No problem.

play31:34

AUDIENCE: OK, so currently, since you're

play31:36

devoting a few years of your life to this

play31:38

and you're doing a PhD on it, you

play31:39

do think it's important to try to understand the low level

play31:43

specifics of it all to understand

play31:44

the mind and thoughts, right?

play31:46

Is that what's going on?

play31:48

DAVID DALRYMPLE: So I think what I'm

play31:50

trying to establish is a lower bound on what's important,

play31:54

because there is a lot of people out there,

play31:57

like Terry Sejnowski, who argue that what's going on inside

play32:01

the synapse is important.

play32:02

The mechanics of vesicles diffusion is important.

play32:05

And if you're not keeping track of the vesicles,

play32:07

you're really missing the point.

play32:09

And so what I'm trying to do is, at least for one

play32:11

organism, for 30 behaviors of this organism, say,

play32:16

you know what?

play32:17

Vesicle motion is not important for this.

play32:20

And then hopefully once that's established,

play32:23

we can sort of move forward and say, OK, well,

play32:25

maybe the neuron, the different compartments in the neuron

play32:28

aren't important either.

play32:29

Maybe there is some functional units

play32:31

that we can start to consider.

play32:33

But I'm trying to just sort of establish that lower bound.

play32:36

And in addition, I think in C. elegans,

play32:38

where you have a total of 302 neurons, that's

play32:41

on the same order of magnitude as the number

play32:43

of functional regions that we've identified

play32:45

with MRI in human brains.

play32:47

And I think in C. elegans, each of the neurons

play32:49

really is pretty specialized to do a certain job

play32:52

in the organism.

play32:53

So I'm not sure that you could go that much

play32:55

higher in this model system.

play32:57

But at least I'd like to say the neurons are the lowest level

play33:02

that you need to worry about.

play33:03

AUDIENCE: OK, so as a follow up to that--

play33:06

so that's really cool.

play33:07

That actually clarified some things for me.

play33:09

But do you think that work on higher-level stuff

play33:13

is still useful at this point?

play33:15

DAVID DALRYMPLE: Oh yeah, absolutely,

play33:17

but I think that work on higher level stuff

play33:21

is largely the same sort of work that

play33:24

has been possible for a while.

play33:27

I mean, there is certainly an argument

play33:29

that we have way cheaper and more powerful

play33:31

computers than we did when people started doing AI.

play33:35

But I feel like most AI is not--

play33:39

it's not really about scale, you know,

play33:40

unless you're talking about Google-style AI,

play33:42

which I feel like is not really the point.

play33:48

The reason that I moved into neuroscience

play33:49

is because it's clear to me that there is something

play33:53

that you can do now that you couldn't do before.

play33:55

There is something you can see that you couldn't see before.

play33:58

And so there has got to be something

play33:59

that you can learn from that.

play34:02

I don't think that this is the most probable path

play34:08

to intelligence, in the sense, I think

play34:10

there is many, many paths to intelligence.

play34:12

And the collection of all of them

play34:14

that don't involve looking at brains at all,

play34:17

there is a greater probability of success

play34:19

than the collection of all that involve looking at brains.

play34:22

But I think this specific path is

play34:26

the most probable single path.

play34:28

If you were to compare it to hierarchical temporal memory

play34:31

as a specific way that you can go,

play34:34

or if you were to compare it to Bayesian networks

play34:36

as a specific way to go, I think that this is more

play34:40

likely than any specific thing.

play34:42

So it's not that I think that it's the best,

play34:44

but it's certainly the clearest in how to proceed.

play34:47

And I like that.

play34:54

Marvin?

play34:55

AUDIENCE: Is there an estimate of how many genes control

play34:58

the nervous system in the worm?

play35:02

DAVID DALRYMPLE: In the worm, you know, I

play35:04

don't know that number.

play35:05

I think if you're talking about just

play35:09

like channels and transporters, it's probably

play35:13

something like 100, if that.

play35:18

There is a class of genes called unc, for uncoordinated.

play35:23

And when you remove those genes, the worm

play35:25

doesn't really swim very well.

play35:28

And there is about 120 genes in that class.

play35:33

So I think that's sort of roughly the nervous system

play35:36

genes, if you will.

play35:37

AUDIENCE: I've seen estimates for the mammalian brain

play35:41

which are 20,000--

play35:43

DAVID DALRYMPLE: Well, 20,000 is how many genes

play35:45

there are in a human.

play35:46

But maybe all of them were important for the brain.

play35:49

Who knows?

play35:52

Yeah?

play35:53

AUDIENCE: When you're talking about moving

play35:54

some of those neurons, are you talking

play35:55

about doing that in a mature worm that doesn't have any more

play35:58

development, obviously?

play35:59

DAVID DALRYMPLE: Yeah.

play36:00

AUDIENCE: What about doing that in a worm that

play36:02

hasn't developed yet?

play36:03

With that, could you see things develop

play36:06

new connections and new--

play36:08

DAVID DALRYMPLE: Things get screwed up.

play36:11

If you do it in a mammal, things adapt.

play36:15

And you wind up kind of being OK.

play36:19

The worm developmental system is not that complicated.

play36:22

And if you start killing things in the larval stage,

play36:25

it kind of just isn't happy.

play36:28

It'll usually live, but it'll--

play36:31

the neurons that were supposed to go there will just sort

play36:34

of get lost.

play36:35

AUDIENCE: Is there a tipping point for what animal

play36:37

is complex enough to adapt, versus what isn't, like a worm?

play36:41

DAVID DALRYMPLE: So the word for it is non-eutelic.

play36:45

Eutelic means that the network structure is fixed

play36:48

and it won't adapt.

play36:50

And I think you can get up to the level of about a snail.

play36:55

There are some eutelic snails.

play36:57

And then beyond that point, certainly all insects

play37:01

have adaptive neural networks.

play37:04

Yeah?

play37:05

AUDIENCE: Could you just explain to me [INAUDIBLE] the kinds

play37:08

of experiments using short term--

play37:12

I mean, you're getting some channels to [INAUDIBLE]??

play37:24

DAVID DALRYMPLE: So the nearest term thing is actually just

play37:29

to focus on the read out, which is the calcium image read out,

play37:36

to express in all the neurons, which no one has ever done,

play37:39

because there is this-- again, it's sort of a cultural bias.

play37:43

You know, not that I'm unbiased--

play37:45

I have one bias.

play37:46

And biologists have the other bias,

play37:48

which is to isolate the smallest publishable unit

play37:51

and sort of say, OK, I'm going to work on this cell

play37:55

and figure out its function.

play37:57

Anything else is noise.

play37:58

And so you try to minimize the expression of your transgene.

play38:03

And what I'm trying to do is maximize that.

play38:05

I want it in all of the cells, because I

play38:07

want to be able to capture the entire system so that I can

play38:09

treat it as sort of a closed system with well-known inputs

play38:11

and outputs.

play38:13

So in this case, the first thing I'm trying to do

play38:15

is just to express calcium just to do confocal imaging

play38:19

and to see if there is any patterns that pop out.

play38:23

And right now, I'm just sort of in the process

play38:25

of trying to get this gene to actually express

play38:29

in all of the neurons.
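One simple way patterns could "pop out" of whole-nervous-system calcium traces is pairwise correlation of activity. This is a generic analysis sketch with fabricated data, not the speaker's actual pipeline:

```python
import math
import random

random.seed(1)

def corr(x, y):
    """Pearson correlation of two equal-length activity traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Fake calcium traces: n1 follows a shared slow signal, n2 is independent.
base = [random.random() for _ in range(200)]
n1 = [b + 0.01 * random.random() for b in base]
n2 = [random.random() for _ in range(200)]

print(round(corr(n1, base), 2))  # close to 1.0 -- a correlated pair
```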

play38:31

Yeah?

play38:32

AUDIENCE: So if every worm has the same number of neurons

play38:34

and they're all connected in the same way, what

play38:37

accounts for the functional differences between worms?

play38:39

Say, like, what makes one more uncoordinated than another?

play38:43

DAVID DALRYMPLE: Oh, so when you do have those mutations,

play38:47

you do get different network structure.

play38:50

Well, not all of the time-- sometimes

play38:53

it's not different network structure.

play38:54

Sometimes it's just that a certain class of neurons

play38:58

is not excitable, it won't fire because it's

play39:00

missing voltage-gated channels, or something like that.

play39:04

Actually, none of them have voltage-gated channels,

play39:06

but if it's missing receptors, for instance,

play39:10

it'll just sit there.

play39:11

And it'll be a roadblock to signals that are

play39:13

supposed to go through there.

play39:15

So yeah, when you start mutating,

play39:18

that breaks the rule that they're all exactly the same.

play39:22

Yeah?

play39:22

AUDIENCE: Followup question.

play39:23

How certain are you that genetic modification of this kind

play39:28

changes the topology?

play39:29

And would genetically modified worms

play39:33

behave in the same way

play39:35

as an ordinary worm?

play39:39

DAVID DALRYMPLE: It's a very good question.

play39:41

And the way that I've sort of dodged that is to say,

play39:44

what I'm looking for is this sort of repertoire

play39:48

of 30 or 40 behaviors.

play39:50

And so suppose that introducing all of these foreign channels

play39:57

really does change what's going on at the level

play40:00

of neural dynamics.

play40:02

But suppose that when you look at what the worm does

play40:05

under various experimental conditions,

play40:07

it's still the same, then yes, you're

play40:12

going to be simulating something that isn't natural.

play40:14

You're going to be simulating the modified

play40:16

state with whatever dynamical changes that are introduced,

play40:20

but you're still going to be capturing the computations that

play40:24

lead to the same behavior.

play40:26

And so in some sense, if you can't tell the difference right

play40:28

away, and what you're trying to do

play40:30

is not to be able to tell the difference to your model,

play40:33

then it doesn't matter if those changes are introduced.

play40:36

But there is definitely a risk that some of the behaviors

play40:38

drop away.

play40:40

For instance, with the voltage sensor, as I said,

play40:45

it's originally a proton pump.

play40:46

But if you put a proton pump into a neuron,

play40:49

you're not going to get any spontaneous activity,

play40:51

because it'll just depolarize--

play40:53

it'll hyperpolarize all the time, well,

play40:56

all the time that that channel's being activated.

play40:58

And since it's supposed to be a passive sensor,

play41:00

that's not good.

play41:02

So it's critical that this sort of performs as

play41:05

advertised by the people who engineered it, that it doesn't

play41:08

perturb the neuron.

play41:09

But you know, if it does, you're going to know.

play41:12

It's not going to do the things that it's

play41:13

supposed-- the worm won't do the things it's supposed to do.

play41:16

Yeah?

play41:17

AUDIENCE: [INAUDIBLE] sort of focus?

play41:22

Or will it be a bit more unpredictable than that?

play41:25

DAVID DALRYMPLE: So I think, my intuition

play41:28

from what I've observed so far, is

play41:31

that it's a collection of largely autonomous

play41:36

local control loops with some long-distance modulatory

play41:41

connections that are usually not active except

play41:44

in exceptional conditions.

play41:46

So you're going to have in each body segment just

play41:51

a tiny control loop.

play41:53

In fact, there is some evidence that the control loop consists

play41:56

of one cell that just happens to synapse onto muscle

play42:00

and also be a sensory neuron that basically says,

play42:05

if the body segment ahead of me is stretching this way,

play42:08

then about 50 milliseconds later, then I

play42:12

should stretch that way too.

play42:13

And so this sort of propagates the undulating wave.
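The local-control-loop picture described here — each segment copying the bend of the segment ahead of it after roughly 50 milliseconds — can be illustrated with a toy delay line. The segment count and drive values are arbitrary:

```python
N_SEGMENTS = 10
DT_MS = 50                       # per-segment propagation delay, per the talk
bend = [0.0] * N_SEGMENTS        # bend angle of each body segment

def step(head_drive):
    """Advance one 50 ms tick: head follows its drive; each other segment
    copies the bend of the segment ahead of it (stretch-receptor coupling)."""
    for i in range(N_SEGMENTS - 1, 0, -1):
        bend[i] = bend[i - 1]
    bend[0] = head_drive

# Drive the head once; after k ticks the pulse has propagated to segment k,
# which is the undulating wave -- no central pattern generator required.
step(1.0)
for _ in range(4):
    step(0.0)
print(bend[:6])  # [0.0, 0.0, 0.0, 0.0, 1.0, 0.0]
```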

play42:17

There isn't a pattern generator, for instance,

play42:20

like you would see in a higher organism that sort of says, OK,

play42:23

you, you, you, you.

play42:25

They sort of coordinate with each other.

play42:27

And then when something goes wrong,

play42:30

then there is other connections that

play42:33

seem to get brought into the loop

play42:36

and modulate those typically autonomous systems.

play42:41

But I don't know.

play42:43

I expect that many surprises await in the full model.

play42:49

Yeah?

play42:49

AUDIENCE: What do you think about the current efforts

play42:51

to jump directly to the connectome of say, a mouse?

play42:53

Do you think it's possible that these projects will

play42:55

succeed without insight?

play42:58

DAVID DALRYMPLE: Without?

play42:59

AUDIENCE: Insight.

play43:00

DAVID DALRYMPLE: I think that it's

play43:02

possible that they will succeed and not provide any insight,

play43:04

if that's what you mean.

play43:07

It's certainly physically possible

play43:10

if you have enough time and resources

play43:13

to do electron photomicrographs of an entire mouse brain.

play43:16

I mean, it's not that far off.

play43:18

If you had, say, $80 million, you could just do it.

play43:24

It just takes a lot of microscopes and a lot of time.

play43:27

So it's not that-- it's not actually a very high-risk

play43:30

project, in that sense.

play43:32

But it's also not really that much of a high-reward project,

play43:35

because no one knows what to do with that data.

play43:39

Like, the most advanced--

play43:40

I actually did a lab rotation with Jeff Lichtman

play43:44

who did the Connectome project.

play43:47

And what they're looking at right now

play43:50

is they're looking at, well, when axon synapse

play43:55

onto dendrites, are they choosing

play43:58

which dendrite to synapse on to randomly or not?

play44:03

And that's the sort of analysis that you can do on that data.

play44:07

And it seems pretty obvious that they're not random.

play44:13

There is some sort of pattern to how

play44:15

things connect in the brain.

play44:17

I think that's intuitively clear.

play44:19

But there is still also this community of people out there

play44:25

who say, no, you know what?

play44:27

We've done neural networks with random connections.

play44:30

They seem to be pretty clever.

play44:31

They're just as clever as the neural networks

play44:33

we've built with carefully designed connections.

play44:35

So you know, there is no real reason

play44:38

that the brain needs to have specific patterns

play44:40

of connection.

play44:40

And nature is always parsimonious and efficient,

play44:43

so it's probably random.

play44:45

And so this is the sorts of stuff

play44:48

that you see in connectomics.

play44:49

And it's not exactly what I'm interested in.

play44:57

AUDIENCE: What sorts of inputs and outputs does the worm have?

play44:59

You mentioned muscles and--

play45:01

DAVID DALRYMPLE: Yes, so the outputs that we can observe

play45:06

are pretty much all muscles.

play45:08

There are body wall muscles.

play45:09

There are egg-laying muscles.

play45:12

There are anal muscles.

play45:13

There are head muscles.

play45:16

And then the inputs, there is a few light-sensitive neurons.

play45:21

So it can do sort of--

play45:24

it's actually very much like a QAPD, which

play45:28

is the sensor that was put on top

play45:31

of early heat-seeking missiles.

play45:33

So it has just enough information

play45:35

that it can navigate toward light or away from light.

play45:40

It has a large variety of chemical sensors, or basically

play45:45

odor sensors.

play45:47

So that's how it finds food.

play45:50

It also has chemo sensors that aren't

play45:52

classified as odor sensors, like carbon dioxide sensors.

play45:56

So if there is too much carbon dioxide,

play45:59

it goes away from there.

play46:02

It has touch sensors.

play46:04

So if you poke it, it goes away.

play46:08

I'm trying to think some of the other sensors.

play46:10

It's mostly odor, definitely, by cell count.

play46:16

Odor, touch, and light, I think that's about it.

play46:22

AUDIENCE: So in the stimulation you

play46:24

intend to just stimulate the inputs

play46:25

and try to get some expected outputs?

play46:27

Or are you going to brute force all the possible neural states

play46:29

for the worm?

play46:30

DAVID DALRYMPLE: So it's 302 neurons.

play46:33

So if each neuron has, say, two states, that's a lot.

play46:40

That's a big number.

play46:41

I'm not going to do that.

play46:44

There is sort of, there is a hierarchy

play46:46

of different sorts of levels of data-collecting pain.

play46:52

And that's the top of it.

play46:55

Well actually, the top of it is that the state of each neuron

play46:57

is a real number and you have to sample a vector

play47:02

field to the precision of its curvature,

play47:07

a 302-dimensional vector field.

play47:09

And then it goes down from there to, at the very bottom,

play47:15

you just sort of saying, OK, each of the neurons

play47:18

is a linear function of the other neurons.

play47:22

And we just have to put together a 302-by-302 matrix

play47:26

of synaptic weights.

play47:29

And in fact, it's known that there

play47:30

is only about 7,000 synapses, so you can zero most of that

play47:33

out right away.

play47:34

And there is only 7,000 numbers you need.

play47:36

That's sort of hyper-optimistic point

play47:38

of view, which is kind of what I thought going into this.
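The hyper-optimistic bottom of that hierarchy — each neuron a linear function of the others, with only ~7,000 of the 302 × 302 weights nonzero — is easy to write down. The weights below are random placeholders, not fitted values:

```python
import random

N = 302
N_SYNAPSES = 7000

random.seed(0)
weights = {}                     # sparse matrix: (pre, post) -> weight
while len(weights) < N_SYNAPSES:
    pre, post = random.randrange(N), random.randrange(N)
    if pre != post:
        weights[(pre, post)] = random.uniform(-1, 1)

def update(state):
    """One linear update of all 302 neurons from the sparse weights."""
    nxt = [0.0] * N
    for (pre, post), w in weights.items():
        nxt[post] += w * state[pre]
    return nxt

state = update([1.0] * N)
print(len(weights), len(state))  # 7000 302
```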

play47:42

And it's probably more complicated than that.

play47:44

But I think there is at least some separability where

play47:48

you can say the synapses and global peptide diffusion

play47:54

are basically the only ways that you

play47:56

can have an impact of one neuron on another.

play48:00

And you can do some sorts of things

play48:02

from genetics just to say, well, you know,

play48:04

here are the channels that are in there.

play48:07

You aren't going to have anything

play48:08

that can't be explained in terms of a potassium current,

play48:11

and a sodium current, and a calcium current.

play48:14

And so there is definitely prior information

play48:16

that you can put into it from the connectome

play48:19

and from genetics.

play48:20

And you can even go so far--

play48:23

and people have-- as to put in prior information

play48:26

from what you would like to get out of it.

play48:29

And you say, well, I want to evolve

play48:31

the synaptic rates toward something that exhibits

play48:34

this type of undulation.

play48:36

And lo and behold, it exhibits that type of undulation,

play48:39

but then you don't know whether it correlates to reality

play48:41

or not.

play48:42

But if you have an instrument where you can check,

play48:45

does this correlate to reality, then

play48:47

you can still use those sorts of approaches as shortcuts

play48:50

through the sort of state space of possible neural networks.

play49:04

Yes?

play49:04

AUDIENCE: What's the lifecycle of a worm like?

play49:06

How long does it take to create a worm that

play49:07

has been genetically--

play49:09

DAVID DALRYMPLE: It's wonderful.

play49:10

The lifecycle is four days.

play49:15

Yeah, it's very convenient.

play49:19

And it's a self-fertilizing hermaphrodite too.

play49:21

So it clones by itself.

play49:25

It's pretty good.

play49:27

It's also the only organism that cryogenics

play49:29

has proven to work on.

play49:31

You can put worms into liquid nitrogen.

play49:33

20 years later, you can thaw them out.

play49:35

And they start crawling around.

play49:38

Yeah?

play49:38

AUDIENCE: So you said that we hope that the neurons have

play49:43

basically a binary state.

play49:44

What's to say that they don't have

play49:46

10 binary states, or something?

play49:48

DAVID DALRYMPLE: Yeah, no, I don't think

play49:49

they do have a binary state.

play49:54

Again, in some ways, it's not the right way

play49:57

to think about the problem, that the neuron has a state,

play50:01

because it seems like a lot of what's going on

play50:05

is, especially in C. elegans, because we're essentially

play50:07

talking about analog computation because there aren't known

play50:10

to be spontaneous action potentials, what you're really

play50:14

looking is more like a control system.

play50:21

Like for instance, there was a paper just

play50:23

published this month that showed that certain networks in C.

play50:26

elegans are there to compute time derivatives.

play50:29

And I believe that that's just a functional block of a PID

play50:36

controller.

play50:37

Then we're going to find the integral part and then

play50:40

something that linearly sums them and then feeds back.

play50:44

So I think it's not really so much

play50:48

about finding the set of states and then saying,

play50:51

OK, for this state, you go to there,

play50:54

in the sense of sort of an if-do rule, because I think that it's

play50:58

a lot simpler than that, in that there are just

play51:03

simple equations that govern most of the processes

play51:06

and how they relate to each other and to the environment.

play51:09

That's my hope.
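The PID picture sketched above, with a derivative block, an integral block, and a linear sum fed back, can be illustrated with a minimal discrete-time controller. This is a generic textbook sketch, not a model of any specific worm circuit; the gains and the toy plant are made up.

```python
class PID:
    """Discrete-time PID controller: the derivative, integral, and
    linear-sum blocks the talk speculates C. elegans circuits implement."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # integral block
        derivative = (error - self.prev_error) / self.dt  # derivative block
        self.prev_error = error
        # linear sum of the three terms, fed back to the actuator
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Drive a simple first-order plant toward a setpoint of 1.0
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):
    x += pid.step(1.0, x) * 0.01
```

The analogy in the talk is that a network computing time derivatives is one of these blocks; finding the integrator and the summing node would complete the loop.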

play51:16

AUDIENCE: How big is the worm?

play51:18

DAVID DALRYMPLE: The adults can grow

play51:20

to be about 700 or 800 microns long and about the diameter

play51:26

of a hair, about 50 to 100 microns in diameter,

play51:29

depending on stage.

play51:32

And they're transparent.

play51:33

So they're challenging to spot with the naked eye,

play51:36

but you can in the right lighting conditions.

play51:40

Yeah?

play51:41

AUDIENCE: I remember in the first days Brenner actually

play51:47

bred small ones.

play51:50

And for example, I think the nervous system

play51:55

and [INAUDIBLE] are separate, pretty much,

play52:00

and with selective breeding, he found

play52:02

some where they were in the same plane and half the length,

play52:08

and so forth.

play52:08

And the purpose of all of that was

play52:11

so that they could get a whole worm into the target

play52:16

of his electron microscope.

play52:20

That way he had similar pictures of the whole thing.

play52:22

DAVID DALRYMPLE: I see.

play52:23

AUDIENCE: And it sounds like they

play52:25

don't have to do that anymore, but it

play52:27

was an interesting case where they controlled

play52:30

the evolution of this beast to--

play52:32

DAVID DALRYMPLE: To be easier to study.

play52:34

AUDIENCE: And so it's not a bad idea.

play52:36

DAVID DALRYMPLE: No, it's not.

play52:39

AUDIENCE: I don't know if they reduced the number of neurons

play52:42

by accident.

play52:48

DAVID DALRYMPLE: Yeah?

play52:49

AUDIENCE: What percentage of this effort

play52:50

do you think is going to be devoted to combining

play52:53

all of these tools?

play52:54

And what percentage do you think is

play52:56

going to be running the aggregate to learn something?

play53:00

DAVID DALRYMPLE: I think once everything works, the process

play53:04

of taking a worm and sort of scanning it in will only

play53:10

take a couple of hours, so 99.99%

play53:16

building the tools, because really,

play53:19

that's about how long you have before these sorts of--

play53:25

the dyes get photobleached and the laser power

play53:28

starts to heat things up uncomfortably for the worm.

play53:34

And then you're studying a different system.

play53:44

Yeah?

play53:45

AUDIENCE: Who funds this research?

play53:48

DAVID DALRYMPLE: Larry Page.

play54:01

AUDIENCE: Personally or--

play54:02

DAVID DALRYMPLE: Yes.

play54:09

Yeah, the NIH isn't a big fan, although there

play54:13

was a faculty member who put in a valiant application.

play54:17

He managed to tie this research in plausible ways

play54:21

to a cure for Parkinson's, which I was really impressed by,

play54:24

but the NIH was not so impressed.

play54:34

AUDIENCE: Did you guys apply to a grant from Larry?

play54:37

Or did he-- did someone talk to him?

play54:39

DAVID DALRYMPLE: It was his idea, actually.

play54:41

AUDIENCE: Oh.

play54:42

DAVID DALRYMPLE: Well, I mean, he wasn't the first person

play54:44

to have the idea.

play54:45

And in fact, he wasn't even the first person

play54:47

to tell me about it.

play54:48

He was the third person to tell me about it.

play54:50

But when he told me about it, I listened.

play54:55

And then when I decided that I was going to do it,

play54:57

I said, well--

play54:59

not to him, obviously.

play55:00

I went through my network of many people

play55:05

and eventually got a message through that

play55:08

I wanted to do this for real.

play55:11

And you know, could he spare a 0.001% of his fortune

play55:14

to make it happen?

play55:16

And yes.

play55:22

AUDIENCE: So how far along are you?

play55:23

How many worms do you have?

play55:32

DAVID DALRYMPLE: It's not anyone's number-one priority

play55:34

to make these genes, again, because all of the people who

play55:38

have the skills to make the genes

play55:41

and breed the worms to express them

play55:45

are naturally biologists who have this bias towards, no, no,

play55:51

we want to do small systems.

play55:53

And so I have to figure out how to do it myself.

play55:56

And not having any background in biology, most of my time

play55:59

lately has been spent taking classes in biology.

play56:03

But I've also been working on some of the computational tools

play56:06

that we'll need.

play56:07

For instance, when you have a worm

play56:10

in this sort of wobbly conformation,

play56:16

and it's changing, obviously, as it behaves,

play56:19

you need to find the neurons and track them

play56:23

at rates of at least 100 hertz in order

play56:27

to keep your lasers pointed in the right place.

play56:30

And so I've been working on that.

play56:31

And I have some pretty good algorithms for that,

play56:33

as well as isolating the signals.

play56:37

And basically, when you're imaging something

play56:42

in confocal mode, which is going to be the first thing that we

play56:45

do, because it's a lot cheaper than two-photon,

play56:50

you have two separate neurons that are on top of each other.

play56:53

They have different signals.

play56:55

And the way that I'm doing it is to say each pixel

play57:00

is some roughly linear function of some set

play57:05

of the neurons that are near that pixel in [? 3D ?] space.

play57:08

And you can actually just use a singular-value decomposition

play57:13

followed by some simple heuristics derived

play57:17

from Microsoft Paint to say, here is where-- you just fill.

play57:21

Literally, you flood fill in where the neurons are.

play57:24

And it works incredibly well.
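The flood-fill step described here can be illustrated with a toy segmentation routine. This is only the labeling stage; the singular-value decomposition and unmixing are omitted, and the image and threshold below are made up for the example.

```python
def flood_fill_label(image, threshold=0.5):
    """Label connected bright regions (candidate neuron ROIs) in a 2D
    intensity image by flood fill -- a toy stand-in for the
    segmentation step described in the talk."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and labels[r][c] == 0:
                current += 1
                stack = [(r, c)]
                while stack:
                    i, j = stack.pop()
                    if (0 <= i < rows and 0 <= j < cols
                            and image[i][j] >= threshold
                            and labels[i][j] == 0):
                        labels[i][j] = current
                        # spread to the 4-connected neighbors
                        stack.extend([(i + 1, j), (i - 1, j),
                                      (i, j + 1), (i, j - 1)])
    return labels, current

# Toy frame with two bright blobs
frame = [[0.9, 0.8, 0.0, 0.0],
         [0.7, 0.0, 0.0, 0.6],
         [0.0, 0.0, 0.7, 0.9]]
labels, n_rois = flood_fill_label(frame)
```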

play57:33

Yeah?

play57:34

AUDIENCE: So do you have a multiple state process

play57:38

or something, like not just that you

play57:41

wanted to look at in the worm, like something,

play57:43

like if you wanted to look at all these neurons

play57:45

as it's producing a new clone or something,

play57:50

like that multiple stuff, would you be able to?

play57:53

DAVID DALRYMPLE: Sorry, what do you mean by multiple states?

play57:55

AUDIENCE: Yeah, like instead of a stimulus and response

play57:59

sort of thing, if you wanted to measure something that,

play58:02

like, a process that took longer?

play58:04

DAVID DALRYMPLE: Oh, yeah.

play58:06

So the nice thing about having control over all of the neurons

play58:13

is that you can also control the sensory neurons,

play58:17

and thereby put the worm into the Matrix.

play58:21

And you can make it experience whatever you want,

play58:24

in principle.

play58:25

This has been done with zebrafish.

play58:28

So the way that you do this is you

play58:31

use a myotoxin to prevent any of the muscles from contracting.

play58:35

And so then you have a perfectly still, paralyzed animal.

play58:40

And then you can feed it whatever sensory stimuli

play58:44

you want.

play58:44

You can read out the motor stimuli, or motor responses,

play58:48

you can feed that back into your simulation.

play58:52

It's exactly like The Matrix.

play58:53

And then the animal is convinced that it's

play58:56

in this environment that doesn't exist.
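The closed-loop "Matrix" setup described here (read motor output, advance a simulated world, write sensory stimuli back) can be sketched as a simple loop. The read and write functions below are stand-ins for the optical recording and stimulation hardware, and the demo environment is invented.

```python
def run_virtual_environment(steps, read_motor, write_sensory, env_step,
                            initial_state):
    """Closed-loop virtual-reality sketch: the paralyzed animal's motor
    readout drives a simulated world, whose state is projected back onto
    the sensory neurons. read_motor/write_sensory stand in for the
    optical read/stimulate hardware."""
    state = initial_state
    trace = []
    for _ in range(steps):
        motor = read_motor()            # e.g. fictive locomotion command
        state = env_step(state, motor)  # advance the simulated world
        write_sensory(state)            # stimulate sensory neurons to match
        trace.append(state)
    return trace

# Toy demo: a constant motor command of 1.0 moves a 1-D "worm"
# forward one unit per step through its virtual world.
positions = run_virtual_environment(
    steps=5,
    read_motor=lambda: 1.0,
    write_sensory=lambda s: None,
    env_step=lambda s, m: s + m,
    initial_state=0.0)
```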

play59:04

Yeah?

play59:05

AUDIENCE: So one thing that I don't perhaps

play59:07

understand with this [INAUDIBLE] but what

play59:10

I'm trying to understand is how much of the state

play59:13

would the worm have?

play59:16

So it's 300-odd neurons, how much of the state

play59:21

do you have knowledge of at any particular time?

play59:24

Is it all of those neurons?

play59:27

DAVID DALRYMPLE: So again, it is.

play59:30

You can measure-- the way that it

play59:33

works, if you have one pair of lasers,

play59:36

you can measure any neuron at any time.

play59:38

And the measurement process takes a few milliseconds.

play59:44

And the time constant of these neurons

play59:48

is on the order of 40 or 50 milliseconds.

play59:51

So you have to be a little bit smart about which neurons

play59:55

you want to look at when.

play59:57

So in that sense, you can't look at all of them

play60:00

simultaneously unless you have more lasers.

play60:03

Again, more lasers solve everything.

play60:05

But yeah, so you can look at calcium.

play60:09

You can look at voltage.

play60:11

And you can't do those at the same time either.

play60:13

You have to say, OK, I'm going to read the voltage of this.

play60:15

I'm going to read the calcium of that.

play60:19

And there is actually a theory called

play60:23

optimal exploration of dynamic environments which was also

play60:26

just published this year about a way

play60:29

to algorithmically make that decision of what thing

play60:34

do I want to look at that is most likely to lead

play60:36

in the long term to gaining the most

play60:38

information about the system, given my expectations

play60:41

of what it might look like?
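A toy version of that scheduling problem: greedily measure whichever neuron's estimate has gone stale the longest. This is only a stand-in for the published information-theoretic method, but it shows the flavor of the decision being made at each step.

```python
def schedule_measurements(n_neurons, n_steps, growth=1.0):
    """Greedy measurement scheduler: always sample the neuron whose
    estimate is most stale. Uncertainty grows while a neuron goes
    unmeasured and resets when it is measured. A toy stand-in for
    information-driven scheduling, not the published algorithm."""
    uncertainty = [1.0] * n_neurons
    order = []
    for _ in range(n_steps):
        # pick the neuron we currently know least about
        target = max(range(n_neurons), key=lambda i: uncertainty[i])
        order.append(target)
        uncertainty[target] = 0.0        # measurement resets its uncertainty
        for i in range(n_neurons):
            uncertainty[i] += growth     # everything drifts while we wait
    return order

order = schedule_measurements(n_neurons=3, n_steps=6)
```

With identical dynamics for every neuron this reduces to round-robin; the interesting cases arise when different neurons drift at different rates, so the scheduler revisits fast ones more often.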

play60:43

AUDIENCE: And then conversely, how much of the state

play60:47

can you [INAUDIBLE]?

play60:49

DAVID DALRYMPLE: And it's the same.

play60:51

You can perturb, either stimulate or inhibit any neuron

play60:57

at any time to any intensity.

play60:59

So you have a little bit more control there.

play61:02

But again, you do have to multiplex.

play61:05

And you have to take advantage of the fact

play61:07

that the neurons are not going to react as quickly as you can.

play61:11

AUDIENCE: And then for each sum, how much of the state

play61:14

do you think you can support?

play61:18

DAVID DALRYMPLE: You're basically

play61:20

just looking at two numbers, but you're looking at--

play61:23

you can look at those numbers anywhere in the cell.

play61:27

So you don't have to assume that the cell is isopotential,

play61:30

although it probably is for most of the neurons--

play61:34

not for the ones that run the whole length of the worm,

play61:36

but there aren't as many of those.

play61:40

So you can you can collect, in some sense,

play61:44

as many numbers as you want, if you're

play61:46

interested in looking at all of the--

play61:48

gridding a particular neuron at a lot of different locations

play61:51

and you don't care about looking at anything else.

play61:54

But you are only going to be looking at calcium or voltage,

play61:59

at least with the proposal that I've got here.

play62:02

There are other genetically encoded sensors.

play62:05

But as far as I know, none of them

play62:07

are likely to be nearly as relevant to neural activity.

play62:10

At the same time, again, you're not

play62:11

getting what's going on in the synapses.

play62:13

You're not getting phosphorylation.

play62:15

You're not getting methylation.

play62:17

And all of those things are certainly

play62:19

important for learning and plasticity.

play62:21

But since C. elegans doesn't have that much of it,

play62:24

it might be OK.

play62:25

AUDIENCE: So you haven't got a complete state transition

play62:27

diagram for the dynamics of it?

play62:29

DAVID DALRYMPLE: Right.

play62:30

Yeah, like I said, a complete transition diagram

play62:33

would take years to construct.

play62:36

And with a life cycle of four days, it's not going to work.

play62:43

Yeah?

play62:44

AUDIENCE: I just wondered about your thoughts

play62:46

about higher-level questions.

play62:53

For example, if you look at a neurology book,

play62:59

it will explain on the basis of what happens when people get

play63:03

a concussion that short-term memories are stored

play63:10

in hippocampus or amygdala--

play63:13

I forget.

play63:18

They can last there for 20 minutes or so.

play63:22

And then over the next day, a memory trace

play63:26

is copied into some other part of the brain,

play63:30

the parietal lobe, the frontal lobe, or something.

play63:34

And I've never seen even a paragraph about, well,

play63:41

how does it figure out where to put a memory?

play63:43

And how are memories represented?

play63:46

So a nice question would be-- if you take something like

play63:49

the idea of [? k lines, ?] the kind of methods

play63:53

you're describing might be nice for that,

play63:56

because the usefulness of the [? k line ?] might be that

play64:01

it's a bunch of neurons which go several centimeters.

play64:06

And so if you could look at neurons

play64:12

in 100 places a centimeter apart,

play64:17

which is very low resolution, then you

play64:20

might be able to find evidence for correlated activities

play64:25

related to some stimulus or whatever.

play64:29

So some of these techniques might

play64:31

work on a much larger brain, just because increasing size

play64:39

is liberating.

play64:41

It means that the interpretation can be clumsier

play64:44

if it's looking at whole bundles of fibers.

play64:47

DAVID DALRYMPLE: Right.

play64:48

Yeah, that's what Ed Boyden is starting to look at now.

play64:52

Basically, I think he's calling it optodes.

play64:55

I'm not sure if he coined that phrase, but--

play64:57

the optical equivalent of sticking an electrode

play65:00

into a brain is--

play65:01

AUDIENCE: Is he using humans or?

play65:04

DAVID DALRYMPLE: Well, he's doing--

play65:06

personally, he's doing mice.

play65:07

And he's working with people who do monkeys.

play65:10

The problem with humans is--

play65:12

AUDIENCE: [INAUDIBLE]

play65:14

DAVID DALRYMPLE: Yes.

play65:16

Yeah, humans have a thick skull.

play65:19

That's part of the issue.

play65:20

But the bigger issue is that the only reason

play65:23

that we get to stick electrodes in humans at all

play65:25

is because it's an approved treatment for epilepsy.

play65:29

Now, he's working on getting optodes approved

play65:31

as a treatment for, well, first for blindness, which

play65:37

is kind of an obvious thing.

play65:38

If you can use an adenovirus to express opsins in an eye that

play65:42

doesn't have them, that's--

play65:44

and it probably doesn't affect that many people,

play65:47

because most blindness is caused by other things.

play65:49

But for those that do, it's a very obvious intervention.

play65:52

But then also PTSD and other sorts of diseases-- so as soon

play65:56

as it gets approved to treat any sort of disease,

play65:58

then you can piggyback on that to do human research.

play66:00

But that hasn't happened yet.

play66:02

AUDIENCE: In the early '60s, there

play66:05

were some successful experiments.

play66:09

There is a guy named [? Brindley ?] who--

play66:13

I'm not sure what his profession was.

play66:15

But he made some of the first electrodes.

play66:25

And he actually got permission from his secretary,

play66:29

who was blind, to put a little plate with 64 electrodes

play66:36

on her occipital [INAUDIBLE].

play66:39

DAVID DALRYMPLE: Wow.

play66:40

AUDIENCE: --and he put little currents in.

play66:45

And she could recognize visual patterns.

play66:49

And each of these electrodes he described

play66:52

as being a little bar which was about half a toothpick

play66:56

at arm's length.

play66:58

And of the 64 electrodes, about 30 of them

play67:02

actually produced these.

play67:03

And the others didn't work.

play67:06

And so he did that.

play67:10

And then he removed that, because nobody--

play67:16

it was a pretty risky thing to do anyway.

play67:21

And at the time, we had a great neuroscientist here

play67:24

named Warren [INAUDIBLE].

play67:27

And he got [? Brindley ?] to come over and talk to us.

play67:31

Incidentally, [? Brindley ?] later

play67:33

discovered the use of nitrous oxide

play67:39

for producing erections in human males.

play67:43

And he gave a demonstration of that in a famous lecture.

play67:48

[LAUGHTER]

play67:52

At the time, the colleague said, if you're

play67:54

interested in stimulating vision in the human brain,

play67:59

you better do it in the next five years

play68:01

or it will be illegal.

play68:05

That was in the early 1960s.

play68:08

And this is the same time that the worms that Brenner was--

play68:17

and we actually thought about that and decided not to.

play68:22

But anyway, it would be nice if we could get back to that.

play68:27

And it might be that low-resolution things

play68:31

distributed very widely would also

play68:34

give a lot of new information.

play68:37

DAVID DALRYMPLE: Yeah, I think there

play68:38

is a lot of promise in MEG, especially because when you're

play68:44

just sort of putting things on the surface of the head,

play68:47

there aren't issues, because it's not surgery,

play68:50

even if you're performing the same effective perturbation

play68:54

to the neurons.

play68:56

And what people are doing with transcranial magnetic

play68:59

stimulation is not that great, but it's certainly promising.

play69:07

AUDIENCE: And there might be some way of getting things

play69:14

into cells that actually synthesize proteins,

play69:18

encoding data, and could come out in the bloodstream later.

play69:22

DAVID DALRYMPLE: Yeah, although there

play69:24

is a group working on that.

play69:25

They're calling it the molecular ticker

play69:27

tape, where you basically--

play69:31

it's complicated.

play69:33

I don't think I can explain it properly.

play69:37

There are a lot of people who are

play69:38

looking for problems to match the solution

play69:43

of high-throughput sequencing, where

play69:46

you can take huge amounts of DNA and sequence it cheaply now.

play69:49

And that's one of those.

play69:54

But it would take pretty heroic effort

play69:56

to then correlate those with the actual experiment

play70:01

that you performed after you've extracted them.

play70:03

AUDIENCE: They have to say where they came from

play70:05

and how long they took.

play70:06

DAVID DALRYMPLE: Right.

play70:07

And I asked them how do you identify the--

play70:12

barcode the cell.

play70:13

And they were like, well, we don't know.

play70:15

We'll figure that out eventually.

play70:17

AUDIENCE: Danny Hillis and I once consulted

play70:19

for Schlumberger who make instrumentation for oil wells.

play70:24

And when they have a deep oil well,

play70:27

there is a pipe that's a couple of miles

play70:29

long, believe it or not.

play70:31

And they get one bit out of every 5 or 10 seconds

play70:36

by putting pressure [INAUDIBLE].

play70:38

And we designed hideously elaborate things

play70:43

that would punch tape and then would

play70:44

come floating up a few days later.

play70:51

It was fun going to the meetings where the geologists explained

play70:55

why each of them were--

play71:06

DAVID DALRYMPLE: Any other questions?

play71:07

They don't have to be about neuroscience.

play71:10

AUDIENCE: Thank you very much.

play71:12

[INAUDIBLE]

play71:17

AUDIENCE: You focused on the experimental side reading out

play71:20

from the worm.

play71:21

At some point, you want all of that data

play71:23

to drive a computational model.

play71:26

Can you build a computational model now

play71:28

and initialize it in various ways

play71:31

and look for any kind of behavior at all?

play71:33

Or do you absolutely need this biological input

play71:36

to begin to drive a computational system?

play71:39

DAVID DALRYMPLE: I think--

play71:41

I am working on some leads for building computational models.

play71:46

The obvious things to do have already sort of

play71:49

been done in that department.

play71:51

Like I said, they're using genetic algorithms

play71:53

to find parameters that satisfy certain conditions.

play71:58

And it doesn't seem that enlightening,

play72:01

so I haven't pursued it too much.

play72:03

But I'm looking at--

play72:05

and this is actually another

play72:08

really interesting connection to me

play72:11

in terms of AI and neuroscience, is

play72:13

that when I'm looking at building

play72:15

the computational model that tries to interpret this data,

play72:18

the key idea that keeps coming back

play72:21

is critics and selectors, because you need to have

play72:24

some set of possibilities.

play72:27

And you need to have some heuristics for determining,

play72:29

based on what data is streaming in, what sort of model

play72:34

seems to apply to it.

play72:35

And then you need to have a meta model where

play72:37

you need to build layers of reflection,

play72:39

where you're saying, well, we need

play72:41

to modify this in this way.

play72:44

And it's not easy to implement that from scratch.

play72:49

So I'm thinking about it.

play72:50

But I think that once we have real data,

play72:55

it'll be more clear what needs to be done to simulate

play72:58

the processes underlying it.

play73:00

And again, it's just sort of my hope

play73:03

that, when observing something that has not

play73:05

been observed before, that some insight will come out

play73:07

of that process.

play73:10

AUDIENCE: Is there a behavioral diagram somewhere?

play73:14

For example, if you feed it a lot of food,

play73:17

presumably it will stop eating.

play73:19

DAVID DALRYMPLE: Right.

play73:21

Yeah, no, there--

play73:23

I don't remember the name of the fellow you mentioned who

play73:26

spent years studying seagulls.

play73:28

AUDIENCE: Tinbergen.

play73:29

DAVID DALRYMPLE: Tinbergen, right--

play73:30

there isn't a Tinbergen of worms, unfortunately.

play73:35

All of the studies are--

play73:37

again, it's this very stamp-collecty approach saying,

play73:42

OK, at 24.6 degrees Celsius, with OP20 growth media,

play73:50

and with worms that are three days into their life,

play73:55

and this number of worms per this area of plate,

play73:59

with this brand of agar, here are the behaviors

play74:03

that we see in response to this stimulus

play74:07

with this number of milliseconds between repetitions, and so on.

play74:11

And no one goes so far as to make

play74:15

a plot with more than one variable,

play74:17

because goodness, what's your control?

play74:20

And it is a little bit frustrating

play74:22

that we're going to have to build those sorts of things

play74:24

ourselves.

play74:25

But hopefully that will be the less hard part

play74:29

than building the model.

play74:33

Yeah?

play74:35

AUDIENCE: Could you just make some thoughts about the way

play74:39

mathematics might relate to--

play74:41

I mean, you said you were interested in that subject.

play74:43

So I just--

play74:44

DAVID DALRYMPLE: Sure, might relate to?

play74:46

AUDIENCE: Well, some of the theories of simple animals.

play74:52

DAVID DALRYMPLE: As I said, I think

play74:53

there is something missing in mathematics.

play74:56

There is some sort of theory that

play74:59

follows from some sort of symmetry that

play75:01

shows up in nervous systems and not in many other places,

play75:06

and maybe shows up in societies as well,

play75:08

but I'm not certain of that.

play75:11

But as far as math that we know, certainly

play75:16

nonlinear dynamical systems is the obvious one,

play75:20

because a neuron is a nonlinear dynamical system.

play75:23

And I think that in simple animals

play75:25

and especially in C. elegans, most

play75:28

of the computations that we see are also nonlinear dynamical

play75:32

systems.

play75:33

As I said, they're integrators or they're derivators.

play75:39

They're things of that nature.

play75:42

And I think a lot of them will turn out

play75:44

to be amenable to analysis of certain differential equation

play75:49

systems.
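A minimal example of the kind of dynamical-system building block described here: a graded-potential neuron integrated with Euler's method, using roughly the ~50-millisecond time constant mentioned earlier in the talk. The gain and the sigmoidal output nonlinearity are illustrative choices, not measured worm parameters.

```python
import math

def simulate_graded_neuron(inputs, tau=0.05, dt=0.001, gain=4.0):
    """Euler integration of a graded-potential (non-spiking) neuron:
        tau * dV/dt = -V + I(t)
    with a sigmoidal output nonlinearity -- a leaky integrator with
    a ~50 ms time constant, the simplest instance of the nonlinear
    dynamical systems discussed above."""
    v = 0.0
    outputs = []
    for current in inputs:
        v += dt / tau * (-v + current)  # leaky integration toward input
        # graded output rate through a sigmoid
        outputs.append(1.0 / (1.0 + math.exp(-gain * (v - 0.5))))
    return outputs

# Step input: the voltage relaxes toward 1 with time constant tau,
# so the output rises from near-zero toward its saturated value.
out = simulate_graded_neuron([1.0] * 500)
```

Stacking a few of these with signed couplings already yields the integrators and differentiators the talk describes as the worm's computational vocabulary.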

play75:50

But at the same time, I think figuring out

play75:53

certainly more complex organisms will

play75:55

require a new way of thinking about how to put together

play75:59

the computation.

play76:01

Von Neumann in 1956 started working on a monograph

play76:08

to accompany a series of lectures called

play76:10

"The Computer and the Brain," in which

play76:12

he would discuss the differences and similarities,

play76:14

and how he thinks the ideas from computer science

play76:17

will relate to neuroscience.

play76:19

And this was 1956.

play76:23

And unfortunately, he got bone cancer.

play76:25

And he died in 1957, and left the document unfinished,

play76:30

and never gave the lectures.

play76:31

And the document concludes very dramatically

play76:35

with this sentence, "However, if the brain

play76:39

uses any sort of mathematics, the language

play76:42

of that mathematics must certainly

play76:44

be different from that which we explicitly and consciously refer

play76:47

to by that name today."

play76:51

And that's where it ends.

play76:58

Yeah?

play76:59

AUDIENCE: Do you have an idol?

play77:00

DAVID DALRYMPLE: What?

play77:01

AUDIENCE: Do you have an idol, like, fictional or non-fiction?

play77:04

DAVID DALRYMPLE: Von Neumann would be the closest, yeah.

play77:09

I mean, Iron Man, but that's just obvious.

play77:22

AUDIENCE: So you're over Edison?

play77:23

DAVID DALRYMPLE: What?

play77:24

AUDIENCE: You're over Edison?

play77:25

DAVID DALRYMPLE: Over Edison--

play77:27

no, Edison is pretty cool too.

play77:31

AUDIENCE: Von Neumann was my hero,

play77:33

because when I finished my thesis in that department,

play77:35

it was on neural networks.

play77:39

And the Math Department didn't know what to make of it.

play77:43

So then came Von Neumann and--

play77:46

did I tell you this story?

play77:47

DAVID DALRYMPLE: No.

play77:48

AUDIENCE: They said, is this mathematics?

play77:51

And he said, if it isn't now, it soon will be.

play77:57

And I got my PhD.

play78:07

AUDIENCE: Can you talk about any of the projects

play78:10

before yours that people have already [INAUDIBLE]?

play78:14

DAVID DALRYMPLE: Yeah, so let's see.

play78:19

In terms of optogenetics, I don't know.

play78:26

I think probably the most famous one

play78:28

just because it has a really cool movie

play78:30

is that someone found a promoter for a class of neurons

play78:34

in mice that is coupled to right turns.

play78:38

And so you can put the mouse in some environment.

play78:41

You know, it behaves.

play78:42

You turn on the light.

play78:43

And it starts turning to the right

play78:44

no matter what it's doing.

play78:45

It just turns around and starts going in circles.

play78:48

And you turn off the light and it

play78:49

goes back to what it was doing.

play78:52

It's mostly cool stuff like that.

play78:56

But it's also found use as a replacement

play79:00

for electrophysiology.

play79:01

You know, anything that you could do by using micropipettes

play79:06

or microelectrodes, well, to some extent,

play79:10

depending on what kind of time resolution you need

play79:12

or what kind of manipulations you want to perform,

play79:15

but a lot of the things that you previously

play79:17

would need very precise and expensive equipment

play79:20

and calibration for, you can now do much more simply

play79:24

by using genetics and a blue LED.

play79:28

So there is a lot of things like, for instance, even

play79:31

in C. elegans, a lot of work has been

play79:33

done by Cori Bargmann in recent years

play79:37

using calcium imaging to just sort of explore

play79:40

more quickly and more thoroughly the dynamics

play79:46

of sensory neurons, which is important to me,

play79:48

because if I'm going to simulate the sensory neurons

play79:51

in some pattern that reflects a virtual reality, then

play79:55

I need to know how that pattern would relate

play79:57

to the real reality that they're observing.

play80:00

And so Cori Bargmann has done a lot of experiments

play80:03

where she just sort of flows in an odorant, for instance.

play80:06

And she's got a laser pointed at the neuron that's

play80:10

known to sense that odorant, and just

play80:12

characterizing the dynamics of how the neuron responds

play80:15

and habituates, which is something that you

play80:18

could do with a micropipette.

play80:20

But the worm is so small that you need--

play80:24

it's just really hard to get something in there

play80:26

onto the specific neuron that you want and have it stick.

play80:30

And you know, it's been done, but it's only

play80:31

been done a few times by people who are very skilled and very

play80:35

lucky.

play80:36

And these types of optical tools make it a lot easier

play80:39

to do those sorts of experiments, if nothing else.

play80:44

AUDIENCE: This is a dumb question, I think.

play80:46

So you can't change anything about the worm

play80:49

once it's already in there?

play80:50

Like, once it's already under the lasers,

play80:53

you can't add more stuff as-- but why would you want to?

play80:55

DAVID DALRYMPLE: Well, I mean, there

play80:57

are reasons that you might want to.

play80:59

A lot of experiments are done-- it's actually

play81:01

really creative what a lot of cellular neuroscientists do,

play81:04

because they realize that once you have something stuck

play81:08

into the cell body of a neuron, you can introduce whatever

play81:13

kinds of molecules you want.

play81:15

And there are certain molecules that

play81:18

are selective blockers of certain channels.

play81:22

And so you can do things like, OK, what if you don't

play81:25

have any potassium channels?

play81:26

What now?

play81:27

And you can do those sorts of perturbations

play81:30

when you have a physical connection to the cytosol,

play81:35

which you can't do optically.

play81:40

So those, again, are things that I hope to show,

play81:44

but it is not known that you don't

play81:47

need to do them in order to model behavior

play81:50

at the scale of the organism.

play81:53

Yeah?

play81:53

AUDIENCE: [INAUDIBLE] like a chemical

play81:57

triggers that are controlled by physical [INAUDIBLE]?

play81:59

Like in humans, there are different glands.

play82:01

DAVID DALRYMPLE: Yeah, so there

play82:03

aren't really glands in C. elegans.

play82:08

There isn't a circulatory system either,

play82:10

but there is sort of like a shared body of fluid

play82:14

through which waste is channeled,

play82:19

that it contacts a lot of cells.

play82:21

And there is some evidence that there are a few neurons that

play82:24

do diffuse transmitter into that, thus effecting

play82:29

a sort of global change in excitability, so yes.

play82:34

The way that that manifests in a model

play82:36

is basically just as an extra node saying this

play82:39

represents the sort of global concentration

play82:41

of glutamate or whatever.
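One way such a diffuse signal might enter a model, as described: a single scalar node that scales every cell's synaptic drive. A toy sketch follows; the network, weights, and the particular scaling rule are all illustrative, not taken from any worm model.

```python
def step_network(voltages, weights, global_mod, dt=0.01, tau=0.05):
    """One Euler step of a toy rate network with one extra 'global
    neuromodulator' node: a scalar that scales every cell's synaptic
    input, standing in for diffuse transmitter in the shared fluid.
    Purely illustrative."""
    n = len(voltages)
    new = []
    for i in range(n):
        synaptic = sum(weights[i][j] * voltages[j] for j in range(n))
        # the modulator multiplicatively gates all synaptic drive
        new.append(voltages[i]
                   + dt / tau * (-voltages[i] + global_mod * synaptic))
    return new

v = [1.0, 0.0]
w = [[0.0, 0.5], [0.5, 0.0]]
quiet = step_network(v, w, global_mod=0.0)  # modulator off: pure decay
```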

play82:57

AUDIENCE: Thank you.

play82:58

DAVID DALRYMPLE: Thank you.

play83:00

MARVIN MINSKY: What's the simplest animal that

play83:02

does a little bit of learning?

play83:06

DAVID DALRYMPLE: C. elegans does a little bit.

play83:08

I mean, it's really just associative learning.

play83:11

So it can learn aversions or attractions

play83:13

to temperature or to chemical stimuli [INAUDIBLE]..

play83:19

But it's probably just one synapse

play83:23

that represents an aversion to this thing

play83:25

and that synapse changes in strength or something

play83:28

like that.

As far as I know-- and I'm not a zoologist, so I really don't know. But among the set of classic neuroscientific model organisms, I think zebrafish are the simplest that show anything resembling abstract learning.

MARVIN MINSKY: I just realized that I think probably we all know something about our ancestry, that is, the sort of 100 million years of being bacteria and things like that. And if you go backwards, there have been mammals for about 100 million years, I think. There is 100 million years of fish, and 100 million years of amphibians, and 100 million years of-- those are all vertebrates-- of mammals, of reptiles, and so forth. And I don't know what happens in the early period, except that we're descended from yeast somehow. And so it'd be nice to know, what are the first few steps up to the worm? And where did it branch? And are we in that lineage? Or did that lead off to the coelenterates and other things that we don't have any horizontal relation to? Anybody know?

DAVID DALRYMPLE: I'm pretty sure that we're not descended from C. elegans. I think that it's a separate branch from vertebrates.

MARVIN MINSKY: But there must have been something like a paramecium.

DAVID DALRYMPLE: Right. So there is actually really interesting work looking at yeast that flock. In typical, sort of wild-type yeast, when it reproduces, the daughter cell sort of diffuses away. But you can get yeast to adhere to itself, so as it reproduces, it forms these globs. And it's actually been shown that, under certain environmental conditions that are not that implausible, those globs have more Darwinian fitness than individuals. And so there is a hypothesis that that's how yeast started to become multicellular.

MARVIN MINSKY: So they must have some extra genes that are not activated normally. I wonder why it isn't taught in grade school? Is that because evolution is not allowed in-- but I came from New York. There weren't many anti-evolutionists yet. Maybe there were. Well, I'm impressed. I think that sounds like a very exciting adventure.

DAVID DALRYMPLE: Thanks.

MARVIN MINSKY: Do any of you have a plan to pursue AI or psychology? Who has a career plan?

AUDIENCE: Were your plans [INAUDIBLE]?

MARVIN MINSKY: I don't remember ever having one. There was just something exciting to do next week. Anyone have a criticism? Should David actually do this?

AUDIENCE: Well, biology is slow, and computers are fast, and they get faster. I think that if someone were to put as much work as David is putting into the biological aspect of this into trying to brute-force model the worm and match it up with the behaviors, they would match up those 30 behaviors faster than he would.

DAVID DALRYMPLE: I would love the competition, but I would just like to point out that lasers are also fast.

MARVIN MINSKY: Well, there is something wonderful about an animal that can reproduce in four days, because, as part of your four-year plan, you could actually plan to breed some that have some particular new neurological behavior on the side.

AUDIENCE: Is there any kind of social behavior that's documented [INAUDIBLE]? Or is there a small animal with social behaviors that you could study in your system? Like interaction between--

DAVID DALRYMPLE: Yeah, I know what you mean. And C. elegans doesn't. Well, actually, it's kind of funny, because the hermaphrodite doesn't have any social behavior, but the male does, for obvious reasons. There are, in fact, 70 extra neurons in the male for the purposes of finding a hermaphrodite to impregnate. But I don't know-- again, not being a zoologist, I walked onto this because it's well-studied and it's the very simplest. But questions of the form, what's the simplest under constraint x, I'm less well-equipped to answer. As far as I know, the best thing in that domain would be ants, but I'm sure there is something simpler than ants that exhibits social behavior. I just don't know what.

MARVIN MINSKY: Well, isn't there some yeast that forms-- in some stage, it actually forms a sort of tower and stands up? I think it's yeast.

DAVID DALRYMPLE: Well, yeast don't have neurons, so these sorts of techniques won't apply there.

AUDIENCE: Slime mold, perhaps?

MARVIN MINSKY: Yeah, maybe that's--

DAVID DALRYMPLE: [INAUDIBLE]. Yeah, slime molds are interesting, because they're sort of all neuron, in a way. They're just clumps of cells that happen to have electrical activity. There might be something interesting there. I don't know if anyone's tried to express optogenetic channels in slime molds.

MARVIN MINSKY: They're very small.

DAVID DALRYMPLE: They are very small, but you could do it with a virus, maybe.

MARVIN MINSKY: Well, what about at a higher level? How would you find something like [? k lines? ?]

DAVID DALRYMPLE: I think that you need something, some way of seeing a lot more things in a lot bigger brain. The one thing that comes to mind as sort of a new technique that might turn up [? k lines ?] is diffusion tensor imaging, which is a way of using MRI to find quite detailed structure. And I don't know the physics of it, but it involves following water molecules as they diffuse, or tracking their flow.

MARVIN MINSKY: Oh, so if something is more active, the diffusion is faster.

DAVID DALRYMPLE: No, it's not functional. It's structural. It's that if there is a bundle of neurons, then the diffusion will be highly anisotropic. And you can measure the anisotropy. And then you can use some tensor math to turn that into what's called a tractogram.
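The anisotropy measure standardly used in DTI is fractional anisotropy (FA), computed from the eigenvalues of the 3x3 diffusion tensor estimated at each voxel. A minimal sketch of that step -- the example tensors here are invented for illustration:

```python
import numpy as np

def fractional_anisotropy(D):
    """Fractional anisotropy (FA) of a 3x3 symmetric diffusion tensor.

    FA is 0 for perfectly isotropic diffusion and approaches 1 when
    diffusion is confined to a single axis, as inside a fiber bundle.
    """
    lam = np.linalg.eigvalsh(D)               # principal diffusivities
    mean = lam.mean()
    num = np.sqrt(np.sum((lam - mean) ** 2))
    den = np.sqrt(np.sum(lam ** 2))
    return np.sqrt(1.5) * num / den

# Isotropic tensor, e.g. free water: FA ~ 0.
print(round(fractional_anisotropy(3.0 * np.eye(3)), 3))           # -> 0.0
# Diffusion mostly along one axis, as in a white-matter tract: FA near 1.
print(round(fractional_anisotropy(np.diag([0.2, 0.2, 1.7])), 3))  # -> 0.87
```

Tractography then chains voxels together by stepping along the principal eigenvector wherever FA is high enough to indicate a coherent fiber bundle, producing the tractogram mentioned above.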

MARVIN MINSKY: But you're measuring the heat flow or something?

DAVID DALRYMPLE: I mean, it's magnetic. It's MRI-- magnetic resonance imaging. So it has some sort of tomographic component. And again, I don't know the physics of it. I just heard of it, in fact, a few weeks ago. It's a new technique. But it's been used to create some maps-- particularly in monkeys, I think, it's being used a lot to create maps of at least the long-range connections between a whole lot of different areas. But even those-- it really can only resolve thick bundles of neurons. And [? k lines ?] probably aren't, but maybe they are [INAUDIBLE].

MARVIN MINSKY: I wonder how thin the skull of a parrot is. I just mention it because it might be the smartest animal per gram or something. Well, how smart are mice?

DAVID DALRYMPLE: Mice are pretty smart.

AUDIENCE: Octopodes are supposed to be smarter, too. And I don't know how much they weigh.

MARVIN MINSKY: Which thing?

AUDIENCE: Octopodes.

DAVID DALRYMPLE: I think they're pretty heavy.

AUDIENCE: Octopodes are heavy? That makes sense. They're in water, so they're dense.

DAVID DALRYMPLE: I think you'd probably find ants to be the smartest per gram, but again, that's just my guess based on what little I know of animals.

AUDIENCE: Parrots are always trying to optimize for weight. And ants aren't.

MARVIN MINSKY: Ants are pretty small. Well, they're very variable. I think Ed Wilson had a 26-year-old ant.

DAVID DALRYMPLE: Wow.

AUDIENCE: It would be glorious to see a large group of ants acting as a parrot.

AUDIENCE: So I just have a quick question for David. You said, well, there is an experiment that sort of showed that you can control mice by shining lights on them. Do you think that there is any fear of the possibility that someone could create a virus that affects all humans and then controls humans to do more than just turn right, by shining lasers onto them from space?

DAVID DALRYMPLE: So [INAUDIBLE]

AUDIENCE: Fear or hope?

DAVID DALRYMPLE: What?

AUDIENCE: Fear or hope?

DAVID DALRYMPLE: Yeah, technology is a double-edged sword. That was fucked up. I mean, human skulls are pretty thick. And even mice skulls are pretty thick. In order to make this happen, you have to have a hole in the skull, and you have to mount your LED in that hole. So there isn't, so far, any way to do this from a distance without having some sort of physical surgical operation.

AUDIENCE: Space is also far away.

DAVID DALRYMPLE: Right. If you're inside a building, it probably won't work. And most important people are inside buildings when they're making important decisions.

AUDIENCE: [INAUDIBLE] turn right.

AUDIENCE: [INAUDIBLE]?

AUDIENCE: Yeah.

DAVID DALRYMPLE: Yeah, I mean, so right now there isn't any fear of that. But it's certainly something to think about, you know, as technology gets better. You never know when we might cross that threshold. But I think for the next 10 years or so, we're probably pretty safe.

MARVIN MINSKY: Well, there are microscopic parasitic worms. And so it would be hard to direct them to go anywhere particular. But you could certainly evolve some that go into the brain, and go somewhere without destroying anything important, and drop little packages here and there.

DAVID DALRYMPLE: There are actually none that infect humans-- actually, I think there is one that infects humans that has a more subtle effect. But there is actually a whole class of parasites which do locate to the brain of larger animals and do in fact cause them to engage in behaviors that are suicidal for the host but beneficial for the parasite.

MARVIN MINSKY: That's right.

AUDIENCE: There is?

MARVIN MINSKY: What's the one that causes some insect to climb up to the top of a tree and--

DAVID DALRYMPLE: I don't remember the name of it, but I know that it exists.

MARVIN MINSKY: It gets in the brain, and makes it climb the tree, and then jump off or whatever. And that spreads this particular parasite.

DAVID DALRYMPLE: Right.

AUDIENCE: There is [INAUDIBLE].

MARVIN MINSKY: So don't go outdoors.

AUDIENCE: There is also-- what is it-- Toxoplasma gondii, which was on Cracked.com, which causes mice to enjoy the smell of cat urine.

DAVID DALRYMPLE: Right.

AUDIENCE: Yeah, that's probably one of the ones you're thinking of. So it'd be interesting to develop something like that for humans.

AUDIENCE: I think people can actually get it from cats or something, and it makes you--

AUDIENCE: Yeah, humans carry it. And it does--

DAVID DALRYMPLE: I think that's the one I was thinking of. It has a subtle effect on humans.

AUDIENCE: It's pretty benign in humans.

AUDIENCE: It's [INAUDIBLE].

AUDIENCE: Like it makes girls more flirtatious or something?

AUDIENCE: Yeah, I think that's it. So [INAUDIBLE].

MARVIN MINSKY: Well, these are all instances of the future dangers of getting better scientific techniques for high school students to do experiments with.

AUDIENCE: There is at least one way to control people. Just throw money at them. [INAUDIBLE] you want.

MARVIN MINSKY: Just throw what?

AUDIENCE: Money.

DAVID DALRYMPLE: Throw money at them.

AUDIENCE: And they will do whatever you want.

MARVIN MINSKY: Oh, yeah.

DAVID DALRYMPLE: Yeah, actually, that's one path that I forgot about that some people do actually pursue. You can just reason directly from what makes people the most money. And that gets you something that makes a lot of money. It's kind of like a person, I guess.

AUDIENCE: I think there are mutations in people that are not affected by money. So it would be interesting to try to develop a virus to counteract money.

DAVID DALRYMPLE: Yeah, there are certainly--

AUDIENCE: We can treat it.

DAVID DALRYMPLE: Right, we can treat alcoholism with disulfiram. So maybe there is some way that we can treat avarice.

AUDIENCE: Won't that lead to the downfall of our economy?

DAVID DALRYMPLE: Yup.

[LAUGHTER]

MARVIN MINSKY: What's this new alcoholism treatment?

DAVID DALRYMPLE: Oh, it's not that new, but it's a drug that makes alcohol incredibly repugnant. And so it's sometimes given to people with severe alcoholism so that they are repulsed by alcohol and don't drink it anymore, as long as they continue taking the medication.

MARVIN MINSKY: There is something like that in my family, because if I drink alcohol more than a small amount, then these little things like ants start crawling on my face. And they're very unpleasant. Any of you have that?

AUDIENCE: I do. A lot of Asians have it, actually.

MARVIN MINSKY: Really?

AUDIENCE: Or maybe it's not-- the one that I have is like you lack alcohol dehydrogenase, the enzyme that breaks down alcohol. And you have two copies of the gene. And I have, like, one good copy and one bad copy, depending on which you find is good or bad. And so I can drink, like, a glass. But any more than that, I kind of get really itchy and red.

MARVIN MINSKY: Yeah, so I don't have any--

AUDIENCE: But if people have two copies where they lack the enzyme, then they have [INAUDIBLE]-- it's like bad things happen.

MARVIN MINSKY: Oh. Well, evolution produces all sorts of strange things.

play102:00

AUDIENCE: Neutral drift tracing is still illegal.

play102:03

DAVID DALRYMPLE: What?

play102:04

AUDIENCE: Neutral drift tracing is still illegal.

play102:06

DAVID DALRYMPLE: What's that?

play102:07

AUDIENCE: It's when an organism goes under neutral drift,

play102:11

it's two organisms competing to not change.

play102:23

MARVIN MINSKY: Yes, why hasn't the net been

play102:24

destroyed by a virus by now?

play102:28

Is there any--

play102:29

AUDIENCE: Why hasn't what been destroyed?

play102:31

DAVID DALRYMPLE: The internet.

play102:32

MARVIN MINSKY: The internet.

play102:32

AUDIENCE: Oh, because it has white blood cells.

play102:37

The people, the system administrators act

play102:39

as white blood cells to their attacks.

play102:41

DAVID DALRYMPLE: Kaspersky.

play102:43

AUDIENCE: And it is largely--

play102:44

DAVID DALRYMPLE: [INAUDIBLE] is the reason

play102:46

the internet is still here.

play102:48

AUDIENCE: Well, it is largely immunized.

play102:51

But it's still infected by parasites.

play102:54

Like, there are botnets the size of which we can only

play102:56

estimate that are sort of parasites

play102:59

running on the internet, interesting [INAUDIBLE]..

play103:04

MARVIN MINSKY: Yeah, I just find it surprising

play103:06

that there hasn't been a really large disaster yet, because--

play103:11

AUDIENCE: Well, it's partially because it's not Xanadu, right?

play103:14

Right it's not Ted Nelson's super-centralized internet.

play103:17

It's a distributed redundant--

play103:19

AUDIENCE: There have been a few attacks

play103:21

that took down large parts of the internet, but nothing quite

play103:24

[INAUDIBLE].

play103:25

MARVIN MINSKY: Say it again?

play103:27

AUDIENCE: There were a few attacks

play103:28

that took down large chunks of the internet in the past.

play103:31

DAVID DALRYMPLE: Yes, I think Croatia's internet was

play103:33

taken down by a woman with a spade who

play103:35

was trying to mine some copper.

play103:39

Oh, this looks like a lot of copper, right?

play103:49

MARVIN MINSKY: Well, I had a Microsoft virus for years,

play103:51

but it never did any harm.

play103:55

It just, if I--

play103:56

I forget what it was called.

play103:59

If I got rid of it, it would come back again.

play104:05

AUDIENCE: People in Croatia still

play104:06

had internet access, though.

play104:07

They could connect via satellite.

play104:09

DAVID DALRYMPLE: This is true.

play104:10

AUDIENCE: So the internet now has

play104:11

sort of enough redundant methods of making connections.

play104:14

DAVID DALRYMPLE: Not quite.

play104:15

People who have satellite links rarely share them.

play104:20

AUDIENCE: Yeah.

play104:21

I guess the thing is that the initial statement is,

play104:23

why hasn't the internet gone down,

play104:25

where the internet is whatever is--

play104:27

it's sort of defined whatever is still connected to anything

play104:29

else, because the internet is defined by its connection.

MARVIN MINSKY: Well, why hasn't there been a smallpox epidemic that killed everyone? Because that has happened for quite a few species.

AUDIENCE: But all the species currently alive have not been killed by a smallpox epidemic.

MARVIN MINSKY: Right, that is correct.

DAVID DALRYMPLE: Ah, the anthropic argument-- always correct and never quite satisfying.

MARVIN MINSKY: I heard an hour-long program this morning-- what's his name, on WBUR-- about making high-speed trains in California. And it was all very interesting and incredibly expensive. And the current plan is to make one that'll take 30 years to construct, which seems rather odd, because you can't expect any particular government, including California's, to be stable. But I wonder what the point is-- are people really going to travel at great expense and cost when they could have telepresence?

DAVID DALRYMPLE: Yeah, moving mass around is kind of a ridiculous way to transfer the information that's inside your brain [INAUDIBLE].

MARVIN MINSKY: And at some point, we might just say, shouldn't we ban international travel just because of the danger of a plague? And I think the danger of a plague is going to suddenly increase because of high school students doing science fair projects. Because the nice thing about evolution is that, contrary to some beliefs, it doesn't have any intentional agents directing it. But once you can make gene strings in high school, then Darwinian evolution becomes a minority.

DAVID DALRYMPLE: I assume you've heard the recent news of the Dutch biomedical engineers who produced a version of H1N1 with 60% mortality.

MARVIN MINSKY: Oh. And where does he keep it?

DAVID DALRYMPLE: In his basement.

AUDIENCE: Wait, why did he do this?

DAVID DALRYMPLE: For science!

AUDIENCE: How do you know the human mortality?

MARVIN MINSKY: Did you make that up?

DAVID DALRYMPLE: Well, I assume he did it for science. I made up the tone of voice.

MARVIN MINSKY: Oh.

AUDIENCE: Was this a military-funded project?

DAVID DALRYMPLE: No. This was at a hospital, at a hospital research institute. I mean, presumably it was funded by someone who is interested in making a cure for H1N1. And you know, they're trying to make it more obvious when you have one in your test population of chinchillas or whatever their model organism is.

AUDIENCE: Ferrets.

DAVID DALRYMPLE: Ferrets, that's right-- close enough.

AUDIENCE: [INAUDIBLE].

MARVIN MINSKY: OK, well, next time, bring some questions. Thank you.

DAVID DALRYMPLE: Thank you.
