2. Three Basic Components or Entities of Artificial Neural Network Introduction | Soft Computing
Summary
TLDR: This video delves into the three fundamental components of artificial neural networks (ANNs): synaptic interconnections, learning rules, and activation functions. It explains various network models, including single and multi-layer feed-forward, single node with feedback, and single and multi-layer recurrent networks. The video also discusses learning methods, such as parameter and structural learning, and covers several activation functions like identity, binary step, bipolar step, ramp, and sigmoidal functions, which are crucial for ANN operations.
Takeaways
- The models of artificial neural networks are specified by three basic components: synaptic interconnections, learning rules, and activation functions.
- Synaptic interconnections can be of five types: single layer feed forward, multi-layer feed forward, single node with feedback, single layer recurrent, and multi-layer recurrent networks.
- In a single layer feed forward network, each neuron in the input layer is directly connected to the output layer neurons.
- Multi-layer feed forward networks include one or more hidden layers between the input and output layers.
- In a single node with feedback, the output is sent back as feedback to modify the computation of the input neuron for a better output.
- Single layer recurrent networks update weights considering both the current input and the previous output.
- Learning in ANNs is categorized into parameter learning, which updates connection weights, and structural learning, which changes the network's structure.
- Activation functions determine the output of a neuron based on its input. Common functions include identity, binary step, bipolar step, ramp, and sigmoidal functions.
- The sigmoidal functions, particularly the binary sigmoid function, are widely used in backpropagation neural networks due to their desirable mathematical properties.
- The derivative of the binary sigmoid function is key for backpropagation, as it helps in calculating the gradient during learning.
- The video aims to clarify the components of artificial neural networks and encourages viewers to engage with the content for further learning.
Q & A
What are the three basic components of an artificial neural network?
-The three basic components of an artificial neural network are the synaptic interconnections, the training or learning rules, and the activation functions.
What are the different types of interconnections in artificial neural networks?
-The different types of interconnections in artificial neural networks include single layer feed forward networks, multi-layer feed forward networks, single node with its own feedback, single layer recurrent networks, and multi-layer recurrent networks.
How does a single layer feed forward network differ from a multi-layer feed forward network?
-A single layer feed forward network has only an input layer and an output layer, whereas a multi-layer feed forward network includes one or more hidden layers in addition to the input and output layers.
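The difference can be sketched in code. A minimal illustration, assuming made-up weight matrices and a linear (identity) activation at each layer, so each layer reduces to a weighted summation:

```python
import numpy as np

def feed_forward(x, weights):
    """Pass input x through successive fully connected layers
    (weighted summation at each layer, identity activation)."""
    out = x
    for W in weights:
        out = W @ out
    return out

x = np.array([1.0, 2.0])

# Single layer feed forward: input layer connects directly to the output
# layer, so there is exactly one weight matrix.
W_single = [np.array([[0.5, -0.5],
                      [1.0,  0.0]])]
print(feed_forward(x, W_single))

# Multi-layer feed forward: a hidden layer (3 neurons here) sits between
# the input and output layers, giving two weight matrices.
W_multi = [np.array([[0.5, -0.5],
                     [1.0,  0.0],
                     [0.2,  0.3]]),      # input -> hidden
           np.array([[1.0, -1.0, 0.5]])]  # hidden -> output
print(feed_forward(x, W_multi))
```

The weight values here are arbitrary; the point is only that the multi-layer case chains an extra matrix multiplication through the hidden layer.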
What is the purpose of a single node with its own feedback in a neural network?
-In a single node with its own feedback, the output of the input neuron is used as feedback to validate and modify the computation of the input neuron to improve the output.
How does a single layer recurrent network differ from a multi-layer recurrent network?
-A single layer recurrent network updates weights considering the previous iteration's outputs along with the current input, while a multi-layer recurrent network has multiple layers, including hidden layers, and updates weights similarly across these layers.
What are the two main types of learning in artificial neural networks?
-The two main types of learning in artificial neural networks are parameter learning, which involves updating connection weights, and structural learning, which focuses on changing the network structure such as the number of processing elements or layers.
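Parameter learning can be illustrated with a toy gradient-descent update on a single linear neuron. This is only a sketch (the input, target, and learning rate are made up); the structure stays fixed while the weights are adjusted:

```python
import numpy as np

x = np.array([1.0, 2.0])   # fixed input (illustrative)
target = 1.0               # desired output
w = np.zeros(2)            # connection weights to be learned
lr = 0.1                   # learning rate

for _ in range(100):
    y = w @ x              # summation with identity activation
    error = y - target
    w -= lr * error * x    # gradient of 0.5 * error**2 w.r.t. w

print(w @ x)               # output approaches the target 1.0
```

Structural learning, by contrast, would change the shape of `w` itself, e.g. adding neurons or layers rather than adjusting existing weights.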
What is an identity function in the context of activation functions?
-An identity function is an activation function where the output is equal to the input, defined as f(x) = x, and it operates linearly on the input values.
What is a binary step function and how does it work?
-A binary step function is an activation function that outputs 1 if the input value is greater than or equal to 0, and 0 if the input value is less than 0, resulting in a binary output range.
What is a bipolar step function and how does it differ from a binary step function?
-A bipolar step function outputs 1 if the input value is greater than or equal to 0, and -1 if the input value is less than 0, providing a bipolar output range, unlike the binary step function which only outputs 0 or 1.
What is the range of output for the ramp function in activation functions?
-The ramp function outputs 1 for input values greater than 1, outputs the input itself (increasing linearly from 0 to 1) for input values between 0 and 1, and outputs 0 for input values less than 0.
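The piecewise activation functions described so far can be sketched directly from their definitions:

```python
def identity(x):
    """f(x) = x."""
    return x

def binary_step(x):
    """1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def bipolar_step(x):
    """1 if x >= 0, else -1."""
    return 1 if x >= 0 else -1

def ramp(x):
    """1 above 1, linear (f(x) = x) on [0, 1], 0 below 0."""
    if x > 1:
        return 1
    if x >= 0:
        return x
    return 0

print(binary_step(-0.5), bipolar_step(-0.5), ramp(0.5), ramp(2.0))
# 0 -1 0.5 1
```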
How is the binary sigmoid function defined and what is its range of output?
-The binary sigmoid function is defined as f(x) = 1 / (1 + e^(-λx)), where λ is the steepness parameter and x is the summation term. Its output ranges from 0 to 1.
What is the derivative of the binary sigmoid function and what does it represent?
-The derivative of the binary sigmoid function is f'(x) = λ · f(x) · (1 - f(x)), which represents the rate of change of the function and is used in backpropagation for learning.
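The derivative formula can be checked numerically. A small sketch (the test point x and λ are arbitrary) comparing the closed form λ·f(x)·(1 - f(x)) against a central finite difference:

```python
import math

def binary_sigmoid(x, lam=1.0):
    """f(x) = 1 / (1 + e^(-lambda * x)); output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-lam * x))

def binary_sigmoid_deriv(x, lam=1.0):
    """f'(x) = lambda * f(x) * (1 - f(x))."""
    f = binary_sigmoid(x, lam)
    return lam * f * (1 - f)

# Central-difference check of the derivative at an arbitrary point
x, lam, h = 0.3, 2.0, 1e-6
numeric = (binary_sigmoid(x + h, lam) - binary_sigmoid(x - h, lam)) / (2 * h)
print(abs(numeric - binary_sigmoid_deriv(x, lam)) < 1e-6)  # True
```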
Outlines
Fundamental Components of Artificial Neural Networks
This paragraph introduces the three fundamental components of artificial neural networks (ANNs): synaptic interconnections, learning rules, and activation functions. It explains the various models of ANNs based on these components. The synaptic interconnections are detailed with examples of different network architectures, such as single-layer feedforward, multi-layer feedforward, single node with feedback, single-layer recurrent, and multi-layer recurrent networks. The paragraph also briefly touches on the learning aspect of ANNs, differentiating between parameter learning (adjusting connection weights) and structural learning (changing network structure).
Activation Functions in Neural Networks
This paragraph delves into the various types of activation functions used in ANNs, which are crucial for determining the output of neurons. It describes the identity function, binary step function, bipolar step function, ramp function, and sigmoidal functions. Each function is defined and explained with its mathematical representation and graphical depiction. The paragraph highlights the importance of these functions in shaping the behavior of neural networks, particularly in backpropagation networks, where derivatives of these functions play a significant role in learning.
Mindmap
Keywords
Artificial Neural Network (ANN)
Synaptic Interconnections
Feedforward Network
Recurrent Network
Learning Rules
Activation Functions
Identity Function
Binary Step Function
Sigmoid Function
Backpropagation
Highlights
The video discusses the three basic components of artificial neural networks.
The first component is the model's synaptic interconnections.
There are five types of interconnections: single layer feed forward, multi-layer feed forward, single node with feedback, single layer recurrent, and multi-layer recurrent networks.
A single layer feed forward network has a direct connection between the input and output layers.
A multi-layer feed forward network includes hidden layers between the input and output layers.
A single node with its own feedback validates the output and adjusts computations if necessary.
In a single layer recurrent network, weights are updated considering both the input and previous outputs.
A multi-layer recurrent network has multiple layers with recurrent connections.
The second component is training or learning rules for updating connection weights.
There are two types of learning: parameter learning and structural learning.
Parameter learning updates connection weights using algorithms like gradient descent.
Structural learning focuses on changing the network's structure, such as the number of neurons or layers.
The third component is activation functions.
Identity function is an activation function where the output equals the input.
Binary step function is an activation function that outputs 1 if the input is greater or equal to 0, otherwise 0.
Bipolar step function outputs 1 if the input is non-negative and -1 if it is negative.
The ramp function outputs the input value if it's between 0 and 1, 1 if it's greater than 1, and 0 if it's negative.
Sigmoidal functions are widely used in backpropagation neural networks.
Binary sigmoid function outputs a value between 0 and 1 and is defined by a logistic equation.
Bipolar sigmoid function outputs a value between -1 and 1 and has a specific derivative form.
The video aims to clarify the concepts of artificial neural networks' components.
Encouragement to like, share, subscribe, and turn on notifications for more content.
Transcripts
Hi, welcome back. In the previous video I discussed what an artificial neural network is and how it works. In this video I will discuss the three basic components of an artificial neural network.
The models of artificial neural networks are specified by three basic components or entities: the first is the model's synaptic interconnections; the second is the training or learning rules adopted for updating and adjusting the connection weights; the third is the activation functions. We will discuss each of these components one by one.
The first component is the connections. In an artificial neural network we have multiple neurons interconnected with one another across multiple layers, such as the input layer, hidden layers, and the output layer.
There exist basically five types of interconnections: first, the single layer feed forward network; second, the multi-layer feed forward network; third, the single node with its own feedback; fourth, the single layer recurrent network; and fifth, the multi-layer recurrent network.
This is how the single layer feed forward network looks. In this case we have two layers: the input layer and the output layer, each of which can have one or more neurons. Each neuron present in the input layer is directly connected to the output layer neurons; that is, X1 is connected to Y1, Y2, ..., YM, and similarly X2 is connected to Y1, Y2, ..., YM, and so on. This is also called a fully connected single layer feed forward network. The input layer receives the inputs, the inputs are forwarded to the output layer, and the output is produced at that layer.
This is how the multi-layer feed forward network looks. Along with the input and output layers, it has one or more hidden layers. The input layer has one or more input neurons, the output layer has one or more output neurons depending on the problem being solved, and each hidden layer can again have one or more neurons. The input layer neurons are connected to the hidden layer neurons, and the hidden layer neurons are connected to the output layer neurons.
Coming to the third one, the single node with its own feedback: in this case we have only one layer, the input layer, and the output of the input neuron is the output itself. We want to validate whether this output is acceptable or not. If it is acceptable, fine; if it is not acceptable, we send back the feedback and modify the computation of the input neuron so that we get a better output.
The next one is the single layer recurrent network. In this case we have an input layer and an output layer, but while updating the weights, rather than considering only the current input and the previous weights, we also consider the outputs: the output of the previous iteration together with the current input is used to update the weights. That is what is called a single layer recurrent network.
A multi-layer recurrent network has a similar architecture, but with multiple layers: along with the input and output layers we also have hidden layers.
These are the different types of connections we can have in an artificial neural network.
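One update step of a recurrent connection can be sketched as follows. This is only an illustration: the weight matrices `W` (input to output) and `V` (output fed back to itself) are made-up values, and the previous iteration's output is combined with the current input:

```python
import numpy as np

W = np.array([[0.5, 0.1]])   # input -> output weights (illustrative)
V = np.array([[0.3]])        # feedback weights on the previous output

def recurrent_step(x, y_prev):
    """Combine the current input with the previous iteration's output."""
    return W @ x + V @ y_prev

# Feed a short sequence; each step reuses the last output as feedback
y = np.zeros(1)
for x in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    y = recurrent_step(x, y)
print(y)
```

In a feed forward network `V` would be absent; the feedback term is what makes the network recurrent.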
Coming to the second component, learning: the main strength of an artificial neural network is its capability to learn from past experience. There are mainly two kinds of learning in artificial neural networks: the first is called parameter learning and the second is called structural learning. In parameter learning we update the connection weights with some logic; for example, we use gradient descent or the backpropagation algorithm to update the weights. Structural learning focuses on changes in the network structure, which include the number of processing elements or the number of layers.
The third component of the artificial neural network is the activation functions.
Several activation functions exist. One is the identity function, defined as f(x) = x, where x is the summation term. As discussed in the previous video, two things take place at each neuron: one is the summation, and the other is applying the activation function on top of that summation term. If applying the activation function gives back the same value, that is the identity function: as the value of x increases, f(x) increases by the same amount, i.e. linearly.
The second one is the binary step function, defined as follows: if the value of x is greater than or equal to 0, the output is 1; if the value of x is less than 0, the output is 0. So the output of this function is either 0 or 1, as shown in the graph.
The third activation function is the bipolar step function, defined as follows: if the value of x is greater than or equal to 0, the output is 1; if the value of x is less than 0, the output is -1. So there are two possible outputs: +1 and -1.
The next activation function is the ramp function, defined as follows: if the value of x is greater than 1, the output is 1. If the value of x is in the range 0 to 1, f(x) = x, so f(x) increases linearly with x; for example, f(0.5) = 0.5. If the value of x is less than 0, f(x) = 0.
Coming to the last activation functions, the sigmoidal functions: they are widely used in backpropagation artificial neural networks. There are two types of sigmoidal functions: one is called the binary sigmoid function and the other is called the bipolar sigmoid function.
The binary sigmoid function is defined as f(x) = 1 / (1 + e^(-λx)), where λ is the steepness parameter and x is the summation term. It is also called the logistic sigmoid function or the unipolar sigmoid function. The peculiar property of the binary sigmoid function is its derivative: f'(x) = λ · f(x) · (1 - f(x)), that is, the steepness value λ multiplied by f(x) and by 1 - f(x). The value of f(x) ranges from 0 to 1.
The bipolar sigmoid function is defined as f(x) = 2 / (1 + e^(-λx)) - 1, which is equivalent to (1 - e^(-λx)) / (1 + e^(-λx)). The value of this f(x) ranges from -1 to +1, and its derivative is f'(x) = (λ / 2) · (1 + f(x)) · (1 - f(x)).
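As with the binary sigmoid, the bipolar derivative formula can be verified numerically. A small sketch (the test point x and λ are arbitrary):

```python
import math

def bipolar_sigmoid(x, lam=1.0):
    """f(x) = 2 / (1 + e^(-lambda * x)) - 1; output in (-1, 1)."""
    return 2.0 / (1.0 + math.exp(-lam * x)) - 1.0

def bipolar_sigmoid_deriv(x, lam=1.0):
    """f'(x) = (lambda / 2) * (1 + f(x)) * (1 - f(x))."""
    f = bipolar_sigmoid(x, lam)
    return (lam / 2.0) * (1 + f) * (1 - f)

# Central-difference check of the derivative at an arbitrary point
x, lam, h = 0.7, 1.5, 1e-6
numeric = (bipolar_sigmoid(x + h, lam) - bipolar_sigmoid(x - h, lam)) / (2 * h)
print(abs(numeric - bipolar_sigmoid_deriv(x, lam)) < 1e-6)  # True
```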
So in this video I have discussed the different components of an artificial neural network: the first is the connections, the second is the learning, and the third is the activation functions. I hope the concept is clear. If you liked the video, do like and share it with your friends, press the subscribe button for more videos, and press the bell icon for regular updates. Thank you for watching.