Deep Learning (CS7015): Lec 2.1 Motivation from Biological Neurons

NPTEL-NOC IITM
23 Oct 2018 · 07:32

Summary

TL;DR: This lecture from CS 7015 delves into the foundational concepts of deep learning, starting with the biological neurons that inspired artificial neurons. It covers the McCulloch-Pitts neuron model and introduces perceptrons, including a learning algorithm and its convergence. The lecture also explores multilayer networks of perceptrons and their representational capabilities. The speaker provides a simplified yet insightful overview of how the human brain processes information through a hierarchical structure of interconnected neurons, emphasizing the parallel processing and division of labor that underpin neural networks.

Takeaways

  • 🧠 The course CS 7015 focuses on deep learning, covering topics like the McCulloch Pitts Neuron, Thresholding Logic, Perceptrons, and their learning algorithms.
  • 🚀 The concept of an artificial neuron, fundamental to deep neural networks, is inspired by biological neurons found in the brain.
  • 🌿 The term 'neuron' was coined in the 1890s to describe neural processing units or cells in our brain.
  • 🧬 Biological neurons receive inputs via dendrites, process them in the soma, and transmit outputs through the axon.
  • 🔗 The strength of the connection between neurons is determined by the synapse.
  • 🧐 Neurons decide whether to 'fire' based on inputs, which can lead to an action like laughter if the input is deemed funny.
  • 🌐 The brain operates through a massively parallel interconnected network of approximately 100 billion neurons.
  • 🔄 There's a division of work among neurons, with some specializing in processing visual data, while others handle different types of sensory information.
  • 📈 Neurons are arranged in a hierarchy, with higher layers in the visual cortex processing increasingly complex features of visual information.
  • 📊 In the visual cortex hierarchy, the first layer (V1) detects edges and corners, the next layer (V2) groups these into feature groups like the nose or eyes, and a higher layer (AIT) recognizes these as part of a whole object, like a human face.
  • ⚠️ The explanation provided is an oversimplification of how the brain works, sufficient for the course on deep learning but not exhaustive of neurobiology.
  • ✨ The course emphasizes understanding the simplified model of neural processing for the purpose of studying deep learning algorithms and their applications.

Q & A

  • What is the main focus of CS 7015 lecture 2?

    -The lecture focuses on the McCulloch Pitts Neuron, Thresholding Logic, Perceptrons, a Learning Algorithm for Perceptrons, the convergence of this algorithm, Multilayer networks of Perceptrons, and the Representation Power of perceptrons.

  • What is the inspiration behind the term 'artificial neuron' used in deep neural networks?

    -The term 'artificial neuron' is inspired by biology, specifically from the brain, where the term 'neuron' was coined for neural processing units or cells in the brain.

  • What is the function of the dendrite in a biological neuron?

    -The dendrite is used to receive inputs from other neurons, serving as the input point for the neuron.

  • What is a Synapse and its role in biological neurons?

    -A Synapse is the connection between neurons that determines the strength of the connection, playing a crucial role in the transmission of signals between neurons.

  • What is the role of the SOMA in a biological neuron?

    -The SOMA acts as the central processing unit of the neuron, where the processing of received inputs takes place.

  • How does the axon contribute to the function of a biological neuron?

    -The axon carries the output from the neuron to other sets of neurons after the SOMA has processed the inputs.

  • What is the significance of the neuron firing in the context of the script?

    -Neuron firing represents the decision-making process of the neuron, such as determining if an input is funny enough to evoke laughter.

  • How many neurons are estimated to be in the human brain?

    -There are approximately 100 billion neurons in the human brain.

  • What does the term 'massively parallel interconnected network' refer to in the context of the brain?

    -It refers to the vast network of neurons that work in parallel, interconnected with each other to process information.

  • What is the division of work among neurons as described in the script?

    -The division of work means that different neurons are responsible for processing different types of data, such as visual data or speech, each playing a specific role.

  • Can you explain the hierarchical arrangement of neurons in the brain as mentioned in the script?

    -The hierarchical arrangement refers to the organization of neurons in layers, where information is processed through multiple levels before reaching the final output, such as in the visual cortex where layers V1, V2, and AIT form a hierarchy for processing visual information.

  • What is the disclaimer provided by the lecturer regarding the explanation of the human brain?

    -The lecturer acknowledges that their understanding of the human brain is limited and that the explanation provided is a very oversimplified version, sufficient for the course but not a comprehensive representation of how the brain works.

Outlines

00:00

🧠 Introduction to Deep Learning and Biological Neurons

This paragraph introduces the second lecture of the CS 7015 deep learning course, focusing on foundational concepts such as the McCulloch Pitts Neuron, Thresholding Logic, Perceptrons, and the Learning Algorithm for Perceptrons, including its convergence. It also previews the discussion on Multilayer networks of Perceptrons and their Representation Power. The lecture begins with a historical context, tracing back to the 1880s and the coining of the term 'neuron' in the 1890s. The inspiration behind artificial neurons in deep neural networks is attributed to biological neurons in the brain. The structure and function of a biological neuron are detailed, including dendrites for receiving inputs, the Synapse that determines connection strength, the SOMA for processing, and the axon for transmitting outputs. A simplified illustration of how neurons process information and the concept of a massively parallel interconnected network of neurons in the brain are also discussed, highlighting the division of labor and hierarchical arrangement of neurons, with a specific example of how neurons might process humor and laughter.

05:02

👁️ Hierarchical Processing in the Visual Cortex

The second paragraph delves into the hierarchical processing of information in the visual cortex of the brain. It describes the flow of visual information from the retina through various layers, ultimately reaching the spinal cord to elicit a physical response, such as muscle movement. The focus is on three specific layers: V1, V2, and AIT, which form a hierarchy responsible for progressively advanced levels of visual processing. Layer 1 is responsible for detecting edges and corners, while layer 2 groups these basic features into more complex shapes, such as facial features. Layer 3 synthesizes this information to recognize higher-level objects, such as a human face. The explanation emphasizes the oversimplified nature of the description, acknowledging the complexity of the human brain and stating that the provided information is sufficient for the course's objectives, which are not centered on biology or neural processing.

Keywords

💡Deep Learning

Deep Learning is a subset of machine learning that focuses on artificial neural networks with multiple layers, enabling the model to learn and make decisions based on complex patterns. It is the overarching theme of the video, as the course CS 7015 is centered around this concept. The script discusses various components of deep learning, such as artificial neurons and perceptrons, which are foundational to understanding deep neural networks.

💡McCulloch Pitts Neuron

The McCulloch Pitts Neuron is a theoretical model of a neuron introduced in 1943 by Warren Sturgis McCulloch and Walter Pitts. It is a simplified, binary-threshold representation of a biological neuron and is widely regarded as the first mathematical model of an artificial neuron. In the script, it is mentioned as the computational model that inspired the creation of artificial neurons in deep learning.

💡Thresholding Logic

Thresholding Logic is a method used in perceptrons, which are early models of artificial neural networks. It involves setting a threshold value that determines whether the perceptron will 'fire' or activate based on the inputs it receives. The script briefly mentions this concept, indicating that it is a part of the learning process for perceptrons and contributes to the decision-making capability of neural networks.
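
The thresholding rule described here can be sketched as a minimal McCulloch-Pitts-style unit. This is an illustrative sketch, not code from the lecture; the "funny signals" example mirrors the cartoon from the transcript:

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts-style unit: output 1 ('fire') iff the
    number of active binary inputs meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# The lecture's cartoon: laugh if at least 2 of the 3 signals
# (funny visuals, funny speech, funny dialogue) are active.
print(mp_neuron([1, 1, 0], threshold=2))  # fires: 1
print(mp_neuron([1, 0, 0], threshold=2))  # does not fire: 0
```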

💡Perceptrons

Perceptrons are the simplest type of artificial neural network, capable of linear classification. They consist of a set of input values, weights, a bias, and an activation function that determines the output. The script discusses perceptrons as a fundamental building block in the study of deep learning and neural networks, emphasizing their role in the development of more complex models.
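
The components listed above (inputs, weights, bias, step activation) can be sketched as a minimal forward pass; the weight values below are illustrative assumptions, not taken from the lecture:

```python
def perceptron(x, w, b):
    """Perceptron: weighted sum of inputs plus bias, passed
    through a step activation (1 if non-negative, else 0)."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else 0

# Example: with these hand-picked weights the unit computes OR.
print(perceptron([0, 0], w=[1, 1], b=-0.5))  # 0
print(perceptron([1, 0], w=[1, 1], b=-0.5))  # 1
```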

💡Learning Algorithm

A Learning Algorithm in the context of neural networks refers to a set of procedures that allow the network to modify its weights and biases based on the input data, with the goal of minimizing the error between the predicted and actual outputs. The script mentions a learning algorithm for perceptrons, highlighting its importance in the training process of neural networks.
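
A minimal sketch of the perceptron learning rule alluded to above (update the weights only on misclassified examples); the toy dataset and hyperparameters here are illustrative assumptions:

```python
def train_perceptron(data, epochs=10, lr=1.0):
    """Perceptron learning rule: on a misclassified example,
    nudge the weights toward (label 1) or away from (label 0)
    the input vector."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = y - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Linearly separable toy data: the AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

On linearly separable data like this, the perceptron convergence theorem guarantees the rule settles on a separating set of weights after finitely many updates.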

💡Convergence

In the script, convergence refers to the guarantee that the perceptron learning algorithm reaches a stable solution after a finite number of updates, provided the training data is linearly separable. It is an important concept in understanding the efficiency and effectiveness of learning algorithms for perceptrons and other neural network models.

💡Multilayer Network

A Multilayer Network, as mentioned in the script, is a neural network that consists of multiple layers of perceptrons or artificial neurons. These layers allow for the processing of complex patterns and features, enabling the network to learn from and make decisions based on intricate data representations.
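
To illustrate why stacking layers adds power, here is a hedged sketch of a two-layer network of threshold units computing XOR, a function no single perceptron can represent. The weights are hand-picked for illustration, not learned:

```python
def step(s):
    """Step activation: 1 if non-negative, else 0."""
    return 1 if s >= 0 else 0

def xor_net(x1, x2):
    """Two-layer network of threshold units computing XOR."""
    h1 = step(x1 + x2 - 1)      # fires if at least one input is on (OR)
    h2 = step(-x1 - x2 + 1.5)   # fires unless both inputs are on (NAND)
    return step(h1 + h2 - 2)    # fires only if both hidden units fire (AND)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```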

💡Representation Power

Representation Power is the ability of a neural network to represent and model complex relationships and patterns in data. The script discusses the representation power of perceptrons, indicating the capacity of these simple networks to capture and learn from various types of data.

💡Biological Neurons

Biological Neurons are the fundamental units of the nervous system, responsible for transmitting signals throughout the body. The script begins with a discussion of biological neurons, drawing parallels between their structure and function and that of artificial neurons in deep learning. This serves as a foundation for understanding the inspiration behind the design of artificial neural networks.

💡Synapse

A Synapse is a junction between two neurons where information is passed from one to another. In the script, the synapse is described as the connection point between neurons that determines the strength of their interaction, which is analogous to the weights in artificial neural networks.

💡SOMA

The SOMA, or soma (from the Greek word for "body"), is the cell body of a neuron, which contains the nucleus and is responsible for processing the inputs it receives. In the context of the script, the SOMA is likened to the central processing unit of a biological neuron, analogous to the processing function of an artificial neuron in a neural network.

💡Axon

The Axon is a long, slender projection of a neuron that carries electrical impulses away from the neuron's cell body. In the script, the axon is mentioned as the component responsible for transmitting the output of a neuron to other neurons, similar to how the output of an artificial neuron is passed on in a neural network.

💡Hierarchy

Hierarchy, in the context of the script, refers to the layered structure of the brain's neural networks, where information is processed through multiple levels before a response is generated. This concept is crucial for understanding how complex decisions and actions are coordinated within the brain and is mirrored in the hierarchical structure of deep neural networks.

💡Visual Cortex

The Visual Cortex is the area of the brain responsible for processing visual information. The script uses the visual cortex as an example to illustrate how hierarchical processing occurs in the brain, with different layers responsible for detecting edges, feature groups, and higher-level objects, which is a simplified model of the complex process of visual perception.

Highlights

Introduction to lecture 2 of CS 7015 on deep learning.

Discussion on McCulloch Pitts Neuron, Thresholding Logic, Perceptrons, and Learning Algorithm for Perceptrons.

Exploration of the convergence of the Perceptron Learning Algorithm.

Introduction to Multilayer network of Perceptrons.

Analysis of the Representation Power of perceptrons.

Historical context starting from the 1880s on biological neurons.

The fundamental unit of deep neural networks, the artificial neuron, and its biological inspiration.

Detailed explanation of biological neurons including dendrites, synapses, soma, and axon.

The concept of a neuron firing as a response to inputs.

Illustration of the massively parallel interconnected network of neurons in the brain.

Division of work among neurons and their hierarchical arrangement.

Example of how different neurons respond to different types of humor.

The process of information relay through multiple layers in the brain.

Hierarchy of layers in the visual cortex and their respective functions.

Explanation of how edges, feature groups, and objects are processed in the visual cortex.

Disclaimer on the oversimplified explanation of the brain's workings for the course.

Conclusion of Module 1 on biological neurons and their relevance to deep learning.

Transcripts

play00:12

So, welcome to lecture 2 of CS 7015 which is the course on deep learning.

play00:16

So, we will talk about McCulloch Pitts Neuron, Thresholding Logic, Perceptrons and a Learning

play00:22

Algorithm for Perceptrons and talk about the convergence of this algorithm, and then we

play00:26

will talk about Multilayer network of Perceptrons and finally, the Representation Power of perceptrons.

play00:31

So, let us start module 1, which is on biological neurons.

play00:36

So, remember during the history we had started all the way back in the 1880s when we spoke

play00:41

about biological neurons.

play00:42

So, we will just start there spend a few minutes on it and then go on to the computational

play00:46

models which is McCulloch Pitts neuron.

play00:48

So, now this is a course on deep learning.

play00:51

So, we are going to talk about deep neural networks now the most fundamental unit of

play00:56

a deep neural network is something known as an artificial neuron.

play01:00

And the question is, why is it called a neuron right, where does the inspiration come from

play01:04

right.

play01:05

So, we already know that the inspiration comes from biology and more specifically it comes

play01:10

from the brain, because we saw that way back in the 1890s, this term neuron was coined

play01:16

for neural processing units or the cells in our brain right.

play01:20

So, now before we move on to the computational neurons or the artificial neurons, we will

play01:24

just see the biological neurons in a bit more detail and then we will move on from there

play01:29

right.

play01:30

So, this is what a typical biological neuron looks like.

play01:33

So, here actually there are 2 neurons.

play01:36

This portion is called the dendrite, so it is used to receive inputs from all the other

play01:43

neurons right.

play01:44

So, that is the place where the input comes in.

play01:47

Then remember we said that in 1950s we discovered that these neurons are actually discrete cells

play01:54

and there is something which connects them.

play01:55

So, that connection is called a Synapse and it decides the strength of the connection

play02:00

between these two neurons.

play02:01

So, there is an input, there is some strength to the connection.

play02:05

Then once this neuron receives inputs from various other neurons, it starts processing

play02:10

it right, so that is the central processing unit which is called the SOMA, and once it

play02:13

is done this processing it will, it is ready to send its output to other set of neurons

play02:18

right.

play02:19

So, that output is carried on by the axon.

play02:20

So, we have inputs, we have some weights attached to the input, we have some processing and

play02:26

then an output right.

play02:27

So, that is what a typical biological neuron looks like.

play02:31

And let us see a very very cartoonish illustration of how this works right, how the neuron works.

play02:36

So, our sense organs interact with the outside world and then they pass on this information

play02:41

to the neuron and then the neuron decides whether I need to take some action.

play02:45

In this case the action could be whether it should laugh or not right, whether the

play02:48

input is really funny enough to evoke laughter.

play02:50

So, if that happens this is known as something that the neuron has fired. Ok.

play02:54

Now, of course, in reality it is not just a single neuron which does all this.

play02:59

There is a massively parallel interconnected network of neurons right.

play03:03

So, you see a massive network here.

play03:06

Now the neurons in the lower level, so these neurons.

play03:11

They actually interact with the sensory organs, they do some processing based on the inputs,

play03:16

so they decide whether I should fire or not.

play03:19

And if they fire they transmit this information to the next set of neurons right and this

play03:22

process continues till the information is relayed all the way back and then finally,

play03:26

you decide whether you need to take any action or not, again in which this case it should

play03:30

be laughter right, so that is how it works.

play03:33

And when I say massively parallel interconnected network I really mean it, because there are

play03:38

10 raised to eleven which is roughly 100 billion neurons in the brain right.

play03:43

Ok.

play03:44

Now, this massively parallel network also ensures that there is some division of work.

play03:48

Now what I mean by that is, it is not that every neuron is responsible for taking care of whether

play03:52

I should laugh or not, nor is every neuron responsible for processing visual data,

play03:57

some neurons may process visual data, some neurons may process speech data and so on

play04:01

right.

play04:02

So, there is this division of work, every neuron has a certain role to play. So for example,

play04:06

in this cartoonish example that we took right.

play04:10

So, there might be this one neuron which fires if the visuals are funny right whatever you

play04:14

are seeing is funny.

play04:15

There will be one neuron which finds Sheldon's speech to be funny right, the way he speaks,

play04:20

so that might be funny and there might be another neuron which actually finds the dialogue

play04:24

content to be funny right.

play04:27

And now all of these pass on the information to the next level and this guy would fire

play04:30

if at least 2 of these 3 inputs are funny right.

play04:33

So, that means, I have some threshold based on which I decide whether to react or not

play04:37

right, if it is really funny then only I laugh; otherwise I will not laugh right.

play04:41

So, the neurons in the brain as was obvious in the previous slide are arranged in a hierarchy

play04:47

and I will take a more realistic example, where we look at the visual cortex.

play04:51

So, this is the portion of the brain, which is responsible for processing visual information

play04:55

right.

play04:56

So, as you see here you have our retina from where the information starts flowing, and

play05:01

it goes through various levels right.

play05:03

So, you see, you follow the arrows and you will see there are several levels; there is

play05:06

one level here, then another here another here and so on right.

play05:09

So, it is again as I was trying to illustrate in that cartoon the information is relayed

play05:13

through multiple layers, and then it goes all the way back to the spinal cord which

play05:19

decides that, in this case I need to move the muscle right.

play05:21

So, that is what is being decided here right.

play05:23

So, the information flows through a hierarchy of layers.

play05:26

And in this particular case I am going to focus on these three circled layers which

play05:31

are V1, V2 and AIT right.

play05:35

So, these actually form a hierarchy and let us see what this hierarchy does right.

play05:40

So, at layer 1 you detect edges and corners.

play05:43

So, I am looking at you all, I just see some dots and some shapes, so that is what layer

play05:48

1 recognizes.

play05:49

I just recognize some edges and some dots and so on.

play05:52

Now, layer 2 tries to group all of these together and come up with some meaningful feature groups

play05:58

right.

play05:59

So, it realizes oh these two edges actually form the nose, these two dots actually form

play06:03

the eyes and these two edges actually form the mouth right.

play06:06

So, that is slightly higher level of processing that it is doing and then layer 3 further

play06:12

collects all this and leads to higher level objects right.

play06:16

So, now it is realizing all these things put together is actually a human face right.

play06:19

So, you had edges and circles or dots, then you had some feature groups and then the feature

play06:25

groups combine into objects right.

play06:26

So, that is how this hierarchy processes.

play06:29

So, here is a disclaimer, I understand very little about how the human brain works right

play06:34

and what you saw is a very oversimplified explanation of how the brain works right.

play06:38

What I told you is there is an input, then a network which has

play06:44

many layers which does some processing and then you have an output right; that is the

play06:47

very simplistic view that I gave you.

play06:49

This is an oversimplified version, but this version suffices for everything that we need

play06:54

for this course right.

play06:55

This is not a biology or a neural processing course right.

play06:57

So, it is enough for this course.

play06:58

So, that is where we will end module 1.


Related Tags
Deep Learning, Neural Networks, Biological Neurons, Artificial Neurons, McCulloch Pitts, Thresholding Logic, Perceptrons, Learning Algorithm, Multilayer Networks, Representation Power