Backpropagation Solved Example - 4 | Backpropagation Algorithm in Neural Networks by Mahesh Huddar

Mahesh Huddar
24 Mar 2024 · 11:24

Summary

TL;DR: This video tutorial explains the backpropagation algorithm using a three-layer neural network example. It details how the input is propagated from the input layer to the output layer, how the error is calculated, and how the weights and biases are updated accordingly. The video also covers the logistic activation function, the error calculation, and the weight-update formulas, guiding viewers through each step to understand how backpropagation minimizes error in neural networks.

Takeaways

  • The video discusses the backpropagation algorithm using a neural network with an input layer, a hidden layer, and an output layer.
  • The example network has two neurons in each layer, with given weights and biases.
  • The input values are 0.05 and 0.10, and the goal is to propagate these inputs through the network to calculate the output.
  • The net input for each neuron is calculated by summing the products of the weights and inputs, plus the bias.
  • The logistic activation function transforms the net input into the neuron's output.
  • The process is repeated for each neuron in the hidden and output layers to calculate their respective outputs.
  • The error is calculated from the difference between the target output and the calculated output using the squared-error formula.
  • The weights and biases are updated based on the error, using the delta (error term) and the learning rate.
  • The cycle of calculating the output, computing the error, and updating the weights is repeated each epoch until the error is minimized or reaches an acceptable level.
  • The video aims to clarify the backpropagation algorithm for neural networks.

Q & A

  • What is the purpose of the backpropagation algorithm?

    -The backpropagation algorithm is used to train neural networks by adjusting the weights and biases to minimize the error between the predicted output and the actual target values.

  • How many layers does the neural network in the example have?

    -The neural network in the example has three layers: an input layer, a hidden layer, and an output layer.

  • How many neurons are there in each layer of the given neural network?

    -There are two neurons in the input layer, two neurons in the hidden layer, and two neurons in the output layer.

  • What are the roles of W1, W2, W3, and W4 in the neural network?

    -W1, W2, W3, and W4 are the weights on the connections from the input-layer neurons to the hidden-layer neurons.

  • What activation function is used in this example?

    -The logistic activation function is used in this example, which is defined as f(x) = 1 / (1 + e^(-x)).

  • How is the net input for a neuron calculated?

    -The net input for a neuron is calculated as the sum of the products of the neuron's weights and the corresponding inputs plus the bias.

  • What is the formula used to calculate the error in the output layer?

    -The error in the output layer is calculated using the formula: error = 1/2 * (Target - Calculated Output)^2, summed over both output neurons.

  • How is the weight update performed in the backpropagation algorithm?

    -The weight update is performed by adjusting each weight based on the error term of the neuron it feeds into, using the formula: new weight = old weight + learning rate * delta * output of the previous layer, where delta is that neuron's error term.

  • What is the significance of the biases B1, B2, B3, and B4?

    -B1 and B2 are biases for the hidden layer neurons, while B3 and B4 are biases for the output layer neurons. They are used to adjust the net input and help in the activation of the neurons.

  • How does the backpropagation algorithm handle the error?

    -The backpropagation algorithm calculates the error at the output layer, then propagates this error backward to update the weights and biases of the network.

  • What is the role of the learning rate in the backpropagation algorithm?

    -The learning rate is a hyperparameter that determines the step size at each iteration while moving toward a minimum of a loss function. It is used to update the weights during the training process.
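The formulas from the answers above can be collected into a few small helper functions. This is a minimal sketch in Python; the numeric values in the usage note below are taken from the video's worked example (the weight values 0.15 and 0.20 are inferred from the numbers quoted in the video, so treat them as assumptions):

```python
import math

def logistic(x):
    """Logistic (sigmoid) activation: f(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def net_input(weights, inputs, bias):
    """Net input of a neuron: sum of weight * input products, plus the bias."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def total_error(targets, outputs):
    """Output-layer error: sum of 1/2 * (target - output)^2 over the output neurons."""
    return sum(0.5 * (t - o) ** 2 for t, o in zip(targets, outputs))

def updated_weight(old_w, eta, delta, prev_out):
    """Weight update rule: new_w = old_w + eta * delta * (output of previous layer)."""
    return old_w + eta * delta * prev_out
```

For example, `net_input([0.15, 0.20], [0.05, 0.10], 0.35)` gives 0.3775 (the net input at H1 in the video), and `logistic(0.3775)` is about 0.59327.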

Outlines

00:00

Introduction to the Backpropagation Algorithm

The paragraph introduces the back propagation algorithm using a neural network example with three layers: input, hidden, and output. Each layer has two neurons. The input values are provided, and the weights (W1-W8) and biases (B1-B4) are mentioned. The process of propagating the input from the input layer to the output layer is explained. The focus is on calculating the net input and applying the logistic activation function to determine the output of the hidden layer neurons (H1 and H2). The logistic function is defined, and the calculations for the output of H1 and H2 are detailed. The process involves summing the products of weights and inputs, adding biases, and applying the logistic function to find the neuron outputs.
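The forward pass described above can be reproduced in a few lines. The parameter values are inferred from the numbers quoted in the video (they are consistent with its results 0.3775 and 0.59327 at H1), so treat them as assumptions:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Inputs, hidden-layer weights, and biases as used in the video's example
# (values inferred from the numbers quoted in the video)
i1, i2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
b1 = b2 = 0.35

net_h1 = w1 * i1 + w2 * i2 + b1 * 1   # 0.3775
out_h1 = logistic(net_h1)             # ~0.59327
net_h2 = w3 * i1 + w4 * i2 + b2 * 1   # 0.3925
out_h2 = logistic(net_h2)             # ~0.59688
```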

05:00

Updating Weights and Calculating Errors

This section describes the process of updating the weights and biases for the output layer neurons after calculating the output at O1 and O2 using the logistic activation function. The error calculation between the target and the calculated output is explained, using a formula involving the square of the difference between the target and output, divided by 2. The error with respect to each output neuron is calculated, and the weights (W5-W8) and biases (B3-B4) are updated using the learning rate and the error values. The formula for updating the weights involves multiplying the learning rate by the error, the output of the previous layer neuron, and adjusting the bias similarly. The process is repeated for both output neurons.
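The output-layer step above can be sketched as follows. The outputs, targets, and learning rate are the values quoted in the video (rounded), so the results match its numbers to a few decimal places:

```python
# Values from the video's forward pass (targets and learning rate as given)
eta = 0.5
t1, t2 = 0.01, 0.99
out_h1, out_h2 = 0.593270, 0.596884
out_o1, out_o2 = 0.751365, 0.772928

# Error term (delta) for each output neuron: (target - out) * out * (1 - out)
delta_o1 = (t1 - out_o1) * out_o1 * (1 - out_o1)   # ~ -0.13850
delta_o2 = (t2 - out_o2) * out_o2 * (1 - out_o2)   # ~  0.03810

# Weight and bias updates: new = old + eta * delta * (output feeding that weight)
w5_new = 0.40 + eta * delta_o1 * out_h1   # ~0.35892 (was 0.40)
w6_new = 0.45 + eta * delta_o1 * out_h2
b3_new = 0.60 + eta * delta_o1 * 1        # ~0.53075 (was 0.60)
w7_new = 0.50 + eta * delta_o2 * out_h1   # ~0.51130
```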

10:01

Continuing Weight Updates and Error Minimization

The final paragraph discusses the continuation of the back propagation process. It covers updating the weights and biases for the hidden layer neurons (H1 and H2) using the errors calculated at the output layer. The error for each hidden neuron is determined by the contribution of the errors from the output layer, weighted by the connections to the hidden neurons. The weights (W1-W4) and biases (B1-B2) are updated using a similar method to the output layer, involving the learning rate and the calculated errors. The paragraph concludes by emphasizing the iterative nature of back propagation, where the process of propagating inputs, calculating errors, and updating weights is repeated until the error is minimized or reaches an acceptable level. The video ends with a call to action for viewers to like, share, and subscribe for more content.
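The hidden-layer step can be sketched the same way. The deltas and outputs are carried over from the earlier steps of the video's example (rounded), and the old weights W5, W7, and W1 are inferred from the video's numbers:

```python
# Carried over from the earlier steps of the video's example
eta = 0.5
i1, i2 = 0.05, 0.10
out_h1 = 0.593270
delta_o1, delta_o2 = -0.138499, 0.038098   # output-layer error terms
w5, w7 = 0.40, 0.50                        # OLD weights on the paths leaving H1

# Error at H1: output-layer errors weighted by the connections leaving H1,
# times the logistic derivative out_h1 * (1 - out_h1)
delta_h1 = (delta_o1 * w5 + delta_o2 * w7) * out_h1 * (1 - out_h1)  # ~ -0.00877

# Updates for the weights and bias feeding H1
w1_new = 0.15 + eta * delta_h1 * i1   # ~0.14978
w2_new = 0.20 + eta * delta_h1 * i2
b1_new = 0.35 + eta * delta_h1 * 1
```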

Keywords

Back Propagation

Back Propagation is a supervised learning algorithm used to train artificial neural networks. It involves the process of adjusting the weights of the network by calculating the gradient of the loss function with respect to the weights. In the video, back propagation is the central theme, as the host explains how to propagate inputs through the network and then back to adjust weights based on the error.

Neural Network

A Neural Network is a series of algorithms modeled loosely after the human brain. It is composed of layers of interconnected nodes, or neurons. In the script, the neural network has three layers: an input layer, a hidden layer, and an output layer, each with neurons that process the data.

Hidden Layer

The Hidden Layer in a neural network is situated between the input and output layers. It contains neurons that perform transformations on the input data. In the video script, there are two neurons in the hidden layer, which process the inputs and pass them to the output layer.

Output Layer

The Output Layer is the final layer in a neural network that produces the predictions or results. In the context of the video, the output layer has two neurons that generate the final output of the neural network after processing the inputs through the hidden layer.

Weights

Weights in a neural network are the numeric values that are used to adjust the input signals between neurons. The video script describes W1, W2, W3, W4 as weights for the hidden layer neurons and W5, W6, W7, W8 as weights for the output layer neurons, which are updated during back propagation.

Bias

Bias is a parameter that is added to the result of the weighted sum of inputs. It helps to adjust the activation output of a neuron. In the script, B1 and B2 are biases for the hidden layer neurons, while B3 and B4 are biases for the output layer neurons.

Logistic Activation Function

The Logistic Activation Function, also known as the sigmoid function, is an S-shaped curve that can take any real-valued number and map it into a value between 0 and 1. In the video, this function is used to introduce non-linearity into the model, allowing the neural network to learn more complex patterns.
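A useful property of the logistic function, and the reason the factor out * (1 - out) appears in the delta formulas, is that its derivative can be written in terms of its own output: f'(x) = f(x) * (1 - f(x)). A quick numerical check (the value 0.3775 is the net input at H1 in the video's example):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def logistic_derivative(x):
    """Derivative expressed via the output itself: f(x) * (1 - f(x))."""
    fx = logistic(x)
    return fx * (1.0 - fx)

# Compare against a central finite-difference estimate of the slope
x = 0.3775  # net input at H1 in the video's example
h = 1e-6
numeric = (logistic(x + h) - logistic(x - h)) / (2 * h)
analytic = logistic_derivative(x)
```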

Net Input

Net Input refers to the weighted sum of the inputs to a neuron before the activation function is applied. In the script, the net input is calculated as the sum of the product of weights and inputs plus the bias for each neuron in the hidden and output layers.

Error

Error in the context of neural networks refers to the difference between the network's calculated output and the target output. The script calculates the error as error = 1/2 * (Target - Output)^2, summed over the output neurons, and this error drives the weight updates during back propagation.

Learning Rate

The Learning Rate is a hyperparameter that controls the step size during the update of the weights. In the script, the learning rate is set to 0.5 and is used to adjust the weights of the network during the back propagation process.
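The effect of the learning rate on a single update can be seen directly with the video's W5 update (the delta and hidden output are the rounded values from the example):

```python
# W5 update from the video's example, repeated for different learning rates
old_w5 = 0.40
delta_o1 = -0.138499   # error term at O1
out_h1 = 0.593270      # output of hidden neuron H1

for eta in (0.1, 0.5, 1.0):
    new_w5 = old_w5 + eta * delta_o1 * out_h1
    print(eta, round(new_w5, 5))
```

With eta = 0.5 this reproduces the 0.35892 from the video; a smaller learning rate takes a proportionally smaller step away from the old weight 0.40.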

Epoch

An Epoch refers to one complete pass through the entire training dataset. In the video script, after propagating the input and updating the weights, the process is repeated for multiple epochs until the error is minimized or reaches an acceptable level.
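The epoch loop (forward pass, error, update, repeat until the error is acceptable) can be sketched on a toy single-neuron network. The values here are hypothetical, chosen only to show the loop structure, not taken from the video:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy single-neuron network (hypothetical values, not the video's) showing the
# epoch loop: forward pass, error, weight update, repeated until acceptable.
w, b, eta = 0.5, 0.1, 0.5
x, target = 1.0, 0.9
errors = []
for epoch in range(1000):
    out = logistic(w * x + b)            # forward pass
    error = 0.5 * (target - out) ** 2    # squared error
    errors.append(error)
    if error < 1e-4:                     # error acceptable: stop
        break
    delta = (target - out) * out * (1.0 - out)
    w += eta * delta * x                 # weight update
    b += eta * delta * 1.0               # bias update
```

Each pass through the loop is one epoch; the error shrinks until it drops below the acceptable threshold.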

Highlights

Introduction to the backpropagation algorithm using a specific example.

Description of the neural network structure with three layers: input, hidden, and output.

Explanation of the number of neurons in each layer.

Introduction to the weights and biases of the neural network.

Propagation of input from the input layer to the output layer.

Calculation of the net input for the hidden layer neurons.

Application of the logistic activation function to calculate the output of hidden layer neurons.

Formula for the logistic activation function.

Calculation of the output at H1 using the logistic function.

Calculation of the output at H2 using the logistic function.

Propagation to calculate the output at O1 and O2 using the logistic function.

Calculation of the error based on the target and the calculated output.

Update of the weights with respect to the output layer neurons.

Calculation of the error with respect to O1 and O2 for weight update.

Formula for updating the weights during backpropagation.

Update of the weights W5, W6, B3, W7, W8, and B4.

Calculation of the error at H1 and H2 for updating the hidden layer weights.

Update of the weights W1, W2, B1, W3, W4, and B2.

Completion of one epoch and the process of repeating until the error is minimized.

Summary of the backpropagation algorithm and its practical applications.

Transcripts

play00:00

welcome back in this video I will

play00:02

discuss back propagation algorithm with

play00:05

the help of Sol example this is the Sol

play00:08

example number four link for other

play00:10

examples is given in the description

play00:12

below in this case we have been given a

play00:14

neural net with three layers input layer

play00:17

hidden layer and output layer there are

play00:20

two neurons in input layer two neurons

play00:23

in the hidden layer and two neurons in

play00:24

the output layer 05 and 010 is the input

play00:30

we have been given the weights W1 W2 W3

play00:33

W4 are the weights with respective to

play00:35

Hidden layer neurons W5 W6 W7 and W8 are

play00:39

the weights with respective to Output

play00:41

layer neurons B1 and B2 are the bias

play00:44

with respective to Hidden layer neurons

play00:47

B3 B4 are the bias with respective to

play00:49

Output layer neurons so in this case we

play00:52

need to propagate this input from input

play00:53

layer neuron to Output layer neuron and

play00:55

then we need to calculate the error

play00:57

based on the error we need to update the

play00:59

weight here now the question comes in

play01:02

front of us like how to propagate the

play01:04

input from input layer neuron to Output

play01:06

layer uh neuron in this case so first

play01:08

what we do is we will calculate the

play01:10

output at Hidden layer neurons to

play01:12

calculate the output at Hidden layer

play01:14

neurons first we need to calculate the

play01:16

net input on the top of that net input

play01:18

we need to apply the logistic activation

play01:20

function so that we will calculate the

play01:22

output of hidden layer neurons the

play01:24

logistic activation function looks

play01:26

something like this f of x is equal to 1

play01:28

/ 1 + e to 2 - x where X is the net

play01:32

input here now output at H1 is equal to

play01:35

F of net input net input how to

play01:37

calculate it is the sum of

play01:40

multiplication of weight and input here

play01:43

so that is nothing but at H1 uh W1 i1 +

play01:48

W2 I2 plus B1 into 1 here so that is

play01:52

what the net input on the top of this

play01:54

net input what we are applying we are

play01:55

applying the logistic activation

play01:57

function here now once you put the

play01:59

values here

play02:00

uh you will get F of

play02:02

3775 uh that is nothing but 1 divid 1 +

play02:06

eus 3775 and once you solve it you will

play02:10

get output at H1 is equal to

play02:13

59327 here similarly we need to

play02:16

calculate the output at H2 here again we

play02:18

need to calculate the net input that is

play02:20

nothing but you can see here uh W3 i1

play02:23

plus W4 is coming towards H2 here so W4

play02:27

I2 plus B2 into 1 here here and then

play02:31

once you solve it you will get

play02:33

3925 F of 3925 is equal to 1 / 1 + e to

play02:38

-. 3925 again once you solve this

play02:42

equation you will get 59 689 here so

play02:45

this is the output at H2 here now once

play02:48

you calculate the output at uh H1 and H2

play02:52

now we need to calculate the output at

play02:53

o1 and O2 here again the same uh

play02:57

activation function we are going to use

play02:58

that is the logistic activation ation

play03:00

function and the concept is same first

play03:02

we need to calculate the net input once

play03:04

you calculate the net input we need to

play03:06

apply the activation function again what

play03:08

is the net input in this case the net

play03:10

input is you can see here towards o1 W5

play03:13

is coming and W6 is coming so W5

play03:15

multiplied by the output of H1 that is

play03:17

nothing but out of H1 here W6 multiplied

play03:21

by output of H2 here that is nothing but

play03:23

out of H2 plus B3 is coming here so B3

play03:27

multiplied by 1 again once you solve it

play03:29

you will get F of 1.10 591 uh that is

play03:33

nothing but output at1 is equal to

play03:37

75137 here similarly we need to

play03:40

calculate the output at O2 here uh if

play03:42

you notice towards O2 W uh 8 is coming

play03:47

and W7 is coming so W7 multiplied by

play03:50

output at H1 that's the first one W8

play03:52

multiplied by output at H2 uh plus B4 is

play03:57

coming towards O2 so B4 mli by 1 here

play04:01

once you solve it you will get F of 1.22

play04:04

492 uh that is nothing but

play04:07

77293 here so output at o1 and O2 we

play04:10

have calculated now once you calculate

play04:12

the output at o1 and O2 we have already

play04:15

been given the target here based on the

play04:17

Target and the calculated output we need

play04:19

to calculate the error here to calculate

play04:22

the error we use this equation error is

play04:24

equalent to half of squared the

play04:27

difference between Target and calculated

play04:29

out output so 1 by 2 multiplied by

play04:32

Target is how much at this o1 T1 what is

play04:35

the calculated output out of o1 bracket

play04:38

square + 1 by 2 T2 is the target at O2

play04:42

minus calculated output is out of o2

play04:44

bracket Square again we have already

play04:46

calculated these two values T1 and T2

play04:48

are already given to us once you put all

play04:50

these values we will get e is equal to

play04:53

29

play04:55

8371 this is the error at output layer

play04:57

neuron in this case now once you

play05:00

calculate the error at output layer

play05:01

neurons the next step is to update the

play05:03

weights with respect to Output layer

play05:05

neurons that is nothing but W5 W6 W7 W8

play05:08

B3 and B4 here now how to update this

play05:11

particular weights is uh first we need

play05:13

to calculate the error uh with

play05:16

respective to o1 and with respective to

play05:18

O2 here because overall error we have

play05:20

calculated are the overall error with

play05:23

respective to o1 and O2 now we need to

play05:25

calculate the contribution of o1 and

play05:27

contribution of o2 here based on that we

play05:29

can update these particular weights so

play05:31

once you calculate the contribution of

play05:33

o1 you can update W5 W6 and then B3 and

play05:38

once you calculate the contribution of

play05:41

error at oo you can update W7 W8 and

play05:45

this B4 we can update here so that's the

play05:48

reason first we calculate the

play05:49

contribution of error with respect to o1

play05:53

that is delta1 is equal to T1 minus

play05:57

out1 out1 that is nothing but Target at

play06:01

uh o1 minus the output at o1 calculated

play06:04

output multiplied by the calculated

play06:06

output multiplied by 1 minus calculated

play06:09

output here so this is a standard

play06:11

formula in B pration algorithm now once

play06:14

you put the values we have already

play06:15

calculated out o1 E1 is given to us you

play06:18

will get the delta1 that is nothing but

play06:21

-0 1385 Z here so this is the error with

play06:26

respect to O here now once you calculate

play06:28

error with respect to1 next step is to

play06:31

update the associated weights as said

play06:33

earlier W5 W6 and B3 can be updated here

play06:37

W5 is equal to W5 that's a old weight

play06:40

plus the learning rate multiplied by

play06:43

delta1 multiplied by out H1 so out H1 is

play06:46

known to us uh delta1 is known to us W5

play06:50

is known to us the learning rate given

play06:52

in this problem definition is 0.5 so all

play06:54

those values are known to us W5 is

play06:57

equalent to what 3 5892 that is the

play07:01

modified weight here so previously it

play07:03

was 40 now it is 35892 here similarly W6

play07:08

W6 is equal to W6 plus learning rate

play07:11

multiplied by delta1 multiplied by out

play07:13

H2 here once you put all these values

play07:15

you will get 4867 here similarly we need

play07:19

to update B3 here B3 is equal to B3 Plus

play07:23

in the learning rate multiplied by

play07:25

delta1 multiplied by this input that is

play07:27

nothing but one here you will get

play07:31

53075 previously it was

play07:33

60 uh similarly we have to calculate the

play07:36

Delta O2 here Delta O2 is nothing but T2

play07:39

minus out2 out2 * 1 - out2 we need to

play07:44

put all the values you will get delta2

play07:46

here once you calculate the Delta O2 you

play07:49

can update W7 W8 and B4 W7 is equal to

play07:53

W7 plus learning rate delta2 out H1 here

play07:58

because this one is

play08:00

being updated with respect to H1 so we

play08:01

need to take out H1 here so once you put

play08:04

all the values you will get

play08:07

51130 similarly W8 is calculated and B4

play08:10

is calculated here now we have updated

play08:13

all the weights with respect to Output

play08:14

layer neuron the next step is to update

play08:16

the weights with respect to Hidden layer

play08:18

neurons for that reason we need to

play08:20

calculate the error at H1 and error at

play08:22

H2 here to calculate error at H1 and H2

play08:26

we can use this formula that is Delta H1

play08:29

that's the error at H1 is equal to error

play08:32

at o1 that is the error whatever we have

play08:34

calculated at o1 multiplied by W5 that

play08:38

is this one this is the uh

play08:41

W5 and uh plus error at O2 multiplied by

play08:45

W7 so these are the two things we need

play08:47

to multiply with respect to these errors

play08:49

here multiplied by out H1 whatever the

play08:52

out output of H1 is there multiplied by

play08:55

1 minus out of H1 over here so once you

play08:59

put all the these values you will get

play09:00

minus

play09:02

877 uh once you calculate the error at

play09:05

H1 we can update W1 W2 and B1 here again

play09:09

the same formula W1 is equal to W1 +

play09:12

learning rate multiplied by Delta H1

play09:15

whatever the error we have calculated

play09:17

and its input input is how much now i1

play09:20

over here uh W1 we will get .14 978 here

play09:25

similarly we can calculate W2 and we can

play09:28

calculate B1 in the case now once you

play09:30

update these three things next step is

play09:32

to calculate the error at H2 here now if

play09:35

you want to calculate the error at H2

play09:37

Delta H2 is equal to delta1 Multiplied

play09:40

W6 here now this weight so with respect

play09:42

to this age we have W6 here and with

play09:45

respect to this O2 we have W8 here so

play09:50

plus Delta O2 multiplied by W8 here

play09:54

multiplied by out of H2 multiplied by 1

play09:57

minus out of H2 again all values are

play09:59

known to us if you put it you will get

play10:01

minus

play10:02

995 here once you calculate error at H2

play10:06

we can update W3 W4 and B2 here W3 is

play10:10

equal w3+ n learning rate multiplied by

play10:14

Delta H2 multiplied by i1 here that is

play10:16

again the input once you put all the

play10:18

values you will get

play10:20

24975 similarly we can calculate the

play10:23

modified weight of W4 and B2 here now

play10:28

once you calculate up upate all these

play10:29

weights we need to replace the old

play10:32

weights with respect to the new weights

play10:35

this will complete the one Epoch so

play10:37

after one EPO we have propagated the

play10:40

input from input layer neuron to Output

play10:41

layer neurons we have calculated the

play10:43

error here now if the error is

play10:45

acceptable you can stop otherwise what

play10:47

we need to do is as said earlier we have

play10:49

already updated the weights again we

play10:51

need to propagate the input from input

play10:52

layer neuron to Output layer neuron

play10:54

again we have to calculate the error

play10:56

again if the error is acceptable you can

play10:57

stop here otherwise you need to uh

play11:00

repeat the same uh process until the

play11:02

error is minimized or you can say that

play11:05

uh it has reduced to acceptable error in

play11:08

this case so this is how the back

play11:10

propagation algorithm works I hope the

play11:12

concept of back propagation algorithm is

play11:14

clear if you like the video do like and

play11:16

share with your friends press the

play11:18

Subscribe button for more videos press

play11:20

the Bell icon for regular updates thank

play11:22

you for watching
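The full first epoch described in the transcript can be reproduced with a short script. The parameter values are inferred from the numbers quoted in the video (only the results 0.3775, 0.59327, 0.298371, 0.35892, 0.14978, and so on are stated directly), so treat them as assumptions:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Parameters as quoted or implied in the video (assumed values)
i1, i2 = 0.05, 0.10
w = {1: 0.15, 2: 0.20, 3: 0.25, 4: 0.30, 5: 0.40, 6: 0.45, 7: 0.50, 8: 0.55}
b1 = b2 = 0.35
b3 = b4 = 0.60
t1, t2 = 0.01, 0.99
eta = 0.5

# Forward pass
out_h1 = logistic(w[1] * i1 + w[2] * i2 + b1)          # ~0.59327
out_h2 = logistic(w[3] * i1 + w[4] * i2 + b2)          # ~0.59688
out_o1 = logistic(w[5] * out_h1 + w[6] * out_h2 + b3)  # ~0.75137
out_o2 = logistic(w[7] * out_h1 + w[8] * out_h2 + b4)  # ~0.77293

# Total error at the output layer
error = 0.5 * (t1 - out_o1) ** 2 + 0.5 * (t2 - out_o2) ** 2  # ~0.298371

# Output-layer error terms and updates
d_o1 = (t1 - out_o1) * out_o1 * (1 - out_o1)  # ~ -0.13850
d_o2 = (t2 - out_o2) * out_o2 * (1 - out_o2)  # ~  0.03810
w5_new = w[5] + eta * d_o1 * out_h1           # ~0.35892
w6_new = w[6] + eta * d_o1 * out_h2
b3_new = b3 + eta * d_o1                      # ~0.53075
w7_new = w[7] + eta * d_o2 * out_h1           # ~0.51130
w8_new = w[8] + eta * d_o2 * out_h2
b4_new = b4 + eta * d_o2

# Hidden-layer error terms (computed with the OLD weights) and updates
d_h1 = (d_o1 * w[5] + d_o2 * w[7]) * out_h1 * (1 - out_h1)  # ~ -0.00877
d_h2 = (d_o1 * w[6] + d_o2 * w[8]) * out_h2 * (1 - out_h2)  # ~ -0.00995
w1_new = w[1] + eta * d_h1 * i1               # ~0.14978
w2_new = w[2] + eta * d_h1 * i2
b1_new = b1 + eta * d_h1
w3_new = w[3] + eta * d_h2 * i1               # ~0.24975
w4_new = w[4] + eta * d_h2 * i2
b2_new = b2 + eta * d_h2
```

Running this reproduces every rounded value quoted in the transcript, which is a good sanity check on the inferred parameters.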


Related Tags
Machine Learning, Backpropagation, Neural Networks, Algorithm Tutorial, Data Science, Artificial Intelligence, Deep Learning, Mathematical Model, Coding Example, Educational Video