Backpropagation Solved Example - 4 | Backpropagation Algorithm in Neural Networks by Mahesh Huddar
Summary
TL;DR: This video tutorial explains the backpropagation algorithm using a three-layer neural network example. It details how the input is propagated from the input layer to the output layer, how the error is calculated, and how the weights and biases are updated accordingly. The video also covers the logistic activation function, the error calculation, and the weight-update formulas, guiding viewers through each step to understand how backpropagation minimizes error in neural networks.
Takeaways
- 🌐 The video discusses the backpropagation algorithm using a neural network with an input layer, a hidden layer, and an output layer.
- 🔢 The neural network example has two neurons in each layer, with given weights and biases for the layers.
- 📈 The input values are 0.05 and 0.10, and the goal is to propagate these inputs through the network to calculate the output.
- 🧮 The net input for each neuron is calculated by summing the product of the weights and inputs, plus the bias.
- 📉 The logistic activation function is used to transform the net input into the output of the neurons.
- 🔄 The process is repeated for each neuron in the hidden and output layers to calculate their respective outputs.
- 📊 The error is calculated based on the difference between the target output and the calculated output using a specific formula.
- 🔧 The weights and biases are updated based on the error, using the delta (change in error) and the learning rate.
- 🔄 The process of calculating the output, error, and updating weights is repeated for each epoch until the error is minimized or reaches an acceptable level.
- 📚 The video aims to clarify the concept of the backpropagation algorithm for neural networks.
Q & A
What is the purpose of the backpropagation algorithm?
-The backpropagation algorithm is used to train neural networks by adjusting the weights and biases to minimize the error between the predicted output and the actual target values.
How many layers does the neural network in the example have?
-The neural network in the example has three layers: an input layer, a hidden layer, and an output layer.
How many neurons are there in each layer of the given neural network?
-There are two neurons in the input layer, two neurons in the hidden layer, and two neurons in the output layer.
What are the roles of W1, W2, W3, and W4 in the neural network?
-W1, W2, W3, and W4 are the weights on the connections from the input layer to the hidden layer neurons.
What activation function is used in this example?
-The logistic activation function is used in this example, which is defined as f(x) = 1 / (1 + e^(-x)).
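As a quick sketch, the logistic function from the answer above can be written directly in Python; the net input 0.3775 used here is the one computed for neuron H1 later in the video:

```python
import math

def logistic(x):
    """Logistic (sigmoid) activation: maps any real net input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Net input 0.3775 from the video's H1 calculation:
print(round(logistic(0.3775), 5))  # 0.59327
```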
How is the net input for a neuron calculated?
-The net input for a neuron is calculated as the sum of the products of the neuron's weights and the corresponding inputs plus the bias.
What is the formula used to calculate the error in the output layer?
-The error is the sum, over the output neurons, of half the squared difference between target and calculated output: error = 1/2 * (T1 - out O1)^2 + 1/2 * (T2 - out O2)^2.
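Plugging in the example's values (targets T1 = 0.01 and T2 = 0.99 and the computed outputs are assumed from the worked example), this formula can be sketched as:

```python
# Squared-error formula with the worked example's targets and outputs
# (assumed values: T1 = 0.01, T2 = 0.99, out_o1 ≈ 0.75137, out_o2 ≈ 0.77293).
t1, t2 = 0.01, 0.99
out_o1, out_o2 = 0.75136507, 0.77292847
error = 0.5 * (t1 - out_o1) ** 2 + 0.5 * (t2 - out_o2) ** 2
print(round(error, 6))  # 0.298371
```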
How is the weight update performed in the backpropagation algorithm?
-The weight update follows the delta rule: new weight = old weight + learning rate * delta * input to that weight, where delta is the error term of the neuron the weight feeds into, and the input is the output of the previous-layer neuron (or 1 for a bias).
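As a minimal sketch of this update for one weight, here is W5 with the example's values (learning rate 0.5; target, outputs, and the delta formula delta = (target − out) · out · (1 − out) are assumed from the worked example):

```python
# One weight update following the video's convention
# w_new = w_old + eta * delta * input_to_the_weight.
eta = 0.5                      # learning rate from the example
t1, out_o1 = 0.01, 0.75136507  # target and computed output at O1
out_h1 = 0.59326999            # output of H1, the input feeding W5
delta_o1 = (t1 - out_o1) * out_o1 * (1 - out_o1)   # ≈ -0.13850
w5 = 0.40 + eta * delta_o1 * out_h1
print(round(w5, 5))  # 0.35892
```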
What is the significance of the biases B1, B2, B3, and B4?
-B1 and B2 are biases for the hidden layer neurons, while B3 and B4 are biases for the output layer neurons. They are used to adjust the net input and help in the activation of the neurons.
How does the backpropagation algorithm handle the error?
-The backpropagation algorithm calculates the error at the output layer, then propagates this error backward to update the weights and biases of the network.
What is the role of the learning rate in the backpropagation algorithm?
-The learning rate is a hyperparameter that determines the step size at each iteration while moving toward a minimum of a loss function. It is used to update the weights during the training process.
Outlines
🧠 Introduction to Back Propagation Algorithm
The paragraph introduces the back propagation algorithm using a neural network example with three layers: input, hidden, and output. Each layer has two neurons. The input values are provided, and the weights (W1-W8) and biases (B1-B4) are mentioned. The process of propagating the input from the input layer to the output layer is explained. The focus is on calculating the net input and applying the logistic activation function to determine the output of the hidden layer neurons (H1 and H2). The logistic function is defined, and the calculations for the output of H1 and H2 are detailed. The process involves summing the products of weights and inputs, adding biases, and applying the logistic function to find the neuron outputs.
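The forward pass described above can be sketched in Python, assuming the example's inputs, weights, and biases (i1 = 0.05, i2 = 0.10, W1–W8 = 0.15 through 0.55 in steps of 0.05, B1 = B2 = 0.35, B3 = B4 = 0.60):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Inputs, weights, and biases as given in the example
i1, i2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30   # hidden-layer weights
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55   # output-layer weights
b1, b2, b3, b4 = 0.35, 0.35, 0.60, 0.60   # biases

# Hidden layer: net input = weighted sum of inputs + bias, then logistic
out_h1 = logistic(w1 * i1 + w2 * i2 + b1)  # f(0.3775)
out_h2 = logistic(w3 * i1 + w4 * i2 + b2)  # f(0.3925)

# Output layer: same recipe, fed by the hidden-layer outputs
out_o1 = logistic(w5 * out_h1 + w6 * out_h2 + b3)
out_o2 = logistic(w7 * out_h1 + w8 * out_h2 + b4)

print(round(out_h1, 5), round(out_h2, 5))  # 0.59327 0.59688
print(round(out_o1, 5), round(out_o2, 5))  # 0.75137 0.77293
```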
🔄 Updating Weights and Calculating Errors
This section describes the process of updating the weights and biases for the output layer neurons after calculating the output at O1 and O2 using the logistic activation function. The error calculation between the target and the calculated output is explained, using a formula involving the square of the difference between the target and output, divided by 2. The error with respect to each output neuron is calculated, and the weights (W5-W8) and biases (B3-B4) are updated using the learning rate and the error values. The formula for updating the weights involves multiplying the learning rate by the error, the output of the previous layer neuron, and adjusting the bias similarly. The process is repeated for both output neurons.
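This output-layer update step can be sketched in Python, with the forward-pass values, targets, and the learning rate 0.5 assumed from the worked example:

```python
# Output-layer deltas and weight/bias updates, continuing from the forward pass.
eta = 0.5
t1, t2 = 0.01, 0.99
out_h1, out_h2 = 0.59326999, 0.59688438
out_o1, out_o2 = 0.75136507, 0.77292847

# delta = (target - out) * out * (1 - out) for each output neuron
delta_o1 = (t1 - out_o1) * out_o1 * (1 - out_o1)   # ≈ -0.13850
delta_o2 = (t2 - out_o2) * out_o2 * (1 - out_o2)   # ≈  0.03810

# w_new = w_old + eta * delta * input feeding that weight (1 for biases)
w5 = 0.40 + eta * delta_o1 * out_h1   # ≈ 0.35892
w6 = 0.45 + eta * delta_o1 * out_h2   # ≈ 0.40867
b3 = 0.60 + eta * delta_o1 * 1.0      # ≈ 0.53075
w7 = 0.50 + eta * delta_o2 * out_h1   # ≈ 0.51130
w8 = 0.55 + eta * delta_o2 * out_h2   # ≈ 0.56137
b4 = 0.60 + eta * delta_o2 * 1.0      # ≈ 0.61905
```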
🔄 Continuing Weight Updates and Error Minimization
The final paragraph discusses the continuation of the back propagation process. It covers updating the weights and biases for the hidden layer neurons (H1 and H2) using the errors calculated at the output layer. The error for each hidden neuron is determined by the contribution of the errors from the output layer, weighted by the connections to the hidden neurons. The weights (W1-W4) and biases (B1-B2) are updated using a similar method to the output layer, involving the learning rate and the calculated errors. The paragraph concludes by emphasizing the iterative nature of back propagation, where the process of propagating inputs, calculating errors, and updating weights is repeated until the error is minimized or reaches an acceptable level. The video ends with a call to action for viewers to like, share, and subscribe for more content.
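The hidden-layer step can be sketched similarly; note that the output-layer deltas flow back through the old (pre-update) weights W5–W8 (all numeric values assumed from the worked example):

```python
eta = 0.5
i1, i2 = 0.05, 0.10
out_h1, out_h2 = 0.59326999, 0.59688438
delta_o1, delta_o2 = -0.13849856, 0.03809824   # output-layer deltas

# Hidden deltas: output deltas weighted by the OLD connections into each
# output neuron, times the logistic derivative out * (1 - out)
delta_h1 = (delta_o1 * 0.40 + delta_o2 * 0.50) * out_h1 * (1 - out_h1)  # via W5, W7
delta_h2 = (delta_o1 * 0.45 + delta_o2 * 0.55) * out_h2 * (1 - out_h2)  # via W6, W8

w1 = 0.15 + eta * delta_h1 * i1   # ≈ 0.14978
w2 = 0.20 + eta * delta_h1 * i2   # ≈ 0.19956
b1 = 0.35 + eta * delta_h1
w3 = 0.25 + eta * delta_h2 * i1   # ≈ 0.24975
w4 = 0.30 + eta * delta_h2 * i2   # ≈ 0.29950
b2 = 0.35 + eta * delta_h2
```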
Keywords
💡Back Propagation
💡Neural Network
💡Hidden Layer
💡Output Layer
💡Weights
💡Bias
💡Logistic Activation Function
💡Net Input
💡Error
💡Learning Rate
💡Epoch
Highlights
Introduction to the backpropagation algorithm using a specific example.
Description of the neural network structure with three layers: input, hidden, and output.
Explanation of the number of neurons in each layer.
Introduction to the weights and biases of the neural network.
Propagation of input from the input layer to the output layer.
Calculation of the net input for the hidden layer neurons.
Application of the logistic activation function to calculate the output of hidden layer neurons.
Formula for the logistic activation function.
Calculation of the output at H1 using the logistic function.
Calculation of the output at H2 using the logistic function.
Propagation to calculate the output at O1 and O2 using the logistic function.
Calculation of the error based on the target and the calculated output.
Update of the weights with respect to the output layer neurons.
Calculation of the error with respect to O1 and O2 for weight update.
Formula for updating the weights during backpropagation.
Update of the weights W5, W6, B3, W7, W8, and B4.
Calculation of the error at H1 and H2 for updating the hidden layer weights.
Update of the weights W1, W2, B1, W3, W4, and B2.
Completion of one epoch and the process of repeating until the error is minimized.
Summary of the backpropagation algorithm and its practical applications.
Transcripts
Welcome back. In this video I will discuss the backpropagation algorithm with the help of a solved example. This is solved example number four; links to the other examples are given in the description below.

In this case we have been given a neural network with three layers: an input layer, a hidden layer, and an output layer. There are two neurons in the input layer, two neurons in the hidden layer, and two neurons in the output layer. 0.05 and 0.10 are the inputs. We have been given the weights: W1, W2, W3, and W4 are the weights with respect to the hidden layer neurons; W5, W6, W7, and W8 are the weights with respect to the output layer neurons. B1 and B2 are the biases with respect to the hidden layer neurons; B3 and B4 are the biases with respect to the output layer neurons. So in this case we need to propagate the input from the input layer neurons to the output layer neurons, then calculate the error, and based on the error update the weights.

Now the question in front of us is how to propagate the input from the input layer neurons to the output layer neurons. First we calculate the output at the hidden layer neurons. To do that, we first calculate the net input, and on top of that net input we apply the logistic activation function, which gives us the output of the hidden layer neurons. The logistic activation function looks like this: f(x) = 1 / (1 + e^(-x)), where x is the net input.

Now, the output at H1 is f(net input). The net input is the sum of the products of weights and inputs: at H1 it is W1·i1 + W2·i2 + B1·1. On top of this net input we apply the logistic activation function. Once you put in the values, you get f(0.3775), that is, 1 / (1 + e^(-0.3775)), and once you solve it you get the output at H1 equal to 0.59327.

Similarly, we need to calculate the output at H2. Again we calculate the net input: you can see that W3 and W4 come towards H2, so it is W3·i1 + W4·i2 + B2·1. Once you solve it you get 0.3925; f(0.3925) = 1 / (1 + e^(-0.3925)), and solving this equation you get 0.59688. So this is the output at H2.

Now, once you have calculated the outputs at H1 and H2, we need to calculate the outputs at O1 and O2. Again we use the same logistic activation function, and the concept is the same: first calculate the net input, then apply the activation function. What is the net input in this case? Towards O1, W5 and W6 are coming, so it is W5 multiplied by the output of H1 (out H1), plus W6 multiplied by the output of H2 (out H2), plus B3·1, since B3 comes towards O1. Once you solve it you get f(1.10591), which gives the output at O1 equal to 0.75137.

Similarly we calculate the output at O2. If you notice, W7 and W8 come towards O2, so it is W7 multiplied by the output of H1, plus W8 multiplied by the output of H2, plus B4·1, since B4 comes towards O2. Once you solve it you get f(1.22492), which gives 0.77293. So we have calculated the outputs at O1 and O2.

Now, we have already been given the targets. Based on the targets and the calculated outputs we need to calculate the error. To calculate the error we use this equation: the error is half of the squared difference between the target and the calculated output. So it is 1/2 multiplied by (T1, the target at O1, minus out O1) squared, plus 1/2 multiplied by (T2, the target at O2, minus out O2) squared. We have already calculated these two output values, and T1 and T2 are given to us. Once you put in all these values you get E = 0.298371. This is the error at the output layer neurons.

Once you calculate the error at the output layer neurons, the next step is to update the weights with respect to the output layer neurons, that is, W5, W6, W7, W8, B3, and B4. How do we update these weights? First we need to calculate the error with respect to O1 and with respect to O2, because what we have calculated is the overall error with respect to O1 and O2 together. Now we need the contribution of O1 and the contribution of O2; based on that we can update these weights. Once you calculate the contribution of O1, you can update W5, W6, and B3; once you calculate the contribution of the error at O2, you can update W7, W8, and B4.

So first we calculate the contribution of the error with respect to O1: delta O1 = (T1 − out O1) · out O1 · (1 − out O1), that is, the target at O1 minus the calculated output at O1, multiplied by the calculated output, multiplied by one minus the calculated output. This is a standard formula in the backpropagation algorithm. Once you put in the values (out O1 we have already calculated, and T1 is given to us), you get delta O1 = −0.13850. This is the error with respect to O1.

Once you calculate the error with respect to O1, the next step is to update the associated weights; as said earlier, W5, W6, and B3 can be updated. W5 = W5 (the old weight) + learning rate · delta O1 · out H1. Out H1 is known to us, delta O1 is known to us, W5 is known to us, and the learning rate given in this problem definition is 0.5, so all these values are known. W5 comes to 0.35892; that is the modified weight. Previously it was 0.40, now it is 0.35892. Similarly for W6: W6 = W6 + learning rate · delta O1 · out H2. Once you put in all these values you get 0.40867. Similarly we need to update B3: B3 = B3 + learning rate · delta O1 · its input, which is 1. You get 0.53075; previously it was 0.60.

Similarly we have to calculate delta O2: delta O2 = (T2 − out O2) · out O2 · (1 − out O2). Put in all the values and you get delta O2. Once you calculate delta O2, you can update W7, W8, and B4. W7 = W7 + learning rate · delta O2 · out H1, because this weight is connected to H1, so we need to take out H1 here. Once you put in all the values you get 0.51130. Similarly W8 and B4 are calculated.

Now we have updated all the weights with respect to the output layer neurons. The next step is to update the weights with respect to the hidden layer neurons. For that we need to calculate the error at H1 and the error at H2. To do that we can use this formula: delta H1, the error at H1, is equal to (delta O1 · W5 + delta O2 · W7) · out H1 · (1 − out H1) — that is, the error calculated at O1 multiplied by W5, plus the error at O2 multiplied by W7, multiplied by the output of H1 and by one minus the output of H1. Once you put in all these values you get −0.00877.

Once you calculate the error at H1, we can update W1, W2, and B1. Again the same formula: W1 = W1 + learning rate · delta H1 · its input, which is i1 here. You get W1 = 0.14978. Similarly we can calculate W2 and B1 in this case.

Once you update these three, the next step is to calculate the error at H2: delta H2 = (delta O1 · W6 + delta O2 · W8) · out H2 · (1 − out H2), because with respect to H2 we have W6 coming from O1 and W8 coming from O2. Again all the values are known to us; if you put them in you get −0.00995. Once you calculate the error at H2, we can update W3, W4, and B2. W3 = W3 + learning rate · delta H2 · i1, which is again the input. Once you put in all the values you get 0.24975. Similarly we can calculate the modified weights W4 and B2.

Now, once you have updated all these weights, we replace the old weights with the new weights. This completes one epoch. After one epoch we have propagated the input from the input layer neurons to the output layer neurons and calculated the error. If the error is acceptable you can stop; otherwise, since we have already updated the weights, we propagate the input from the input layer neurons to the output layer neurons again, calculate the error again, and if the error is acceptable we stop; otherwise we repeat the same process until the error is minimized, or you can say reduced to an acceptable level. This is how the backpropagation algorithm works.

I hope the concept of the backpropagation algorithm is clear. If you like the video, do like and share it with your friends, press the subscribe button for more videos, and press the bell icon for regular updates. Thank you for watching.
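Putting the whole walkthrough together, one epoch of this example can be sketched in Python (targets T1 = 0.01, T2 = 0.99 and learning rate 0.5 are assumed from the problem statement):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# One full epoch: forward pass, error, and all weight/bias updates.
i1, i2, eta = 0.05, 0.10, 0.5
t1, t2 = 0.01, 0.99
w = dict(w1=0.15, w2=0.20, w3=0.25, w4=0.30, w5=0.40, w6=0.45, w7=0.50, w8=0.55)
b = dict(b1=0.35, b2=0.35, b3=0.60, b4=0.60)

# Forward pass
out_h1 = logistic(w["w1"] * i1 + w["w2"] * i2 + b["b1"])
out_h2 = logistic(w["w3"] * i1 + w["w4"] * i2 + b["b2"])
out_o1 = logistic(w["w5"] * out_h1 + w["w6"] * out_h2 + b["b3"])
out_o2 = logistic(w["w7"] * out_h1 + w["w8"] * out_h2 + b["b4"])
error = 0.5 * (t1 - out_o1) ** 2 + 0.5 * (t2 - out_o2) ** 2

# Output-layer deltas, then hidden deltas through the OLD weights
d_o1 = (t1 - out_o1) * out_o1 * (1 - out_o1)
d_o2 = (t2 - out_o2) * out_o2 * (1 - out_o2)
d_h1 = (d_o1 * w["w5"] + d_o2 * w["w7"]) * out_h1 * (1 - out_h1)
d_h2 = (d_o1 * w["w6"] + d_o2 * w["w8"]) * out_h2 * (1 - out_h2)

# Updates: w_new = w_old + eta * delta * input feeding that weight
w["w5"] += eta * d_o1 * out_h1; w["w6"] += eta * d_o1 * out_h2; b["b3"] += eta * d_o1
w["w7"] += eta * d_o2 * out_h1; w["w8"] += eta * d_o2 * out_h2; b["b4"] += eta * d_o2
w["w1"] += eta * d_h1 * i1;     w["w2"] += eta * d_h1 * i2;     b["b1"] += eta * d_h1
w["w3"] += eta * d_h2 * i1;     w["w4"] += eta * d_h2 * i2;     b["b2"] += eta * d_h2

print(round(error, 6))    # 0.298371
print(round(w["w5"], 5))  # 0.35892
```

Repeating this loop (with the updated weights replacing the old ones) gives the epoch-by-epoch error reduction the video describes.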