
BACKPROPAGATION

Backpropagation is short for "backward propagation of errors." It is a standard method of training artificial neural networks: it computes the gradient of a loss function with respect to all the weights in the network.

Backpropagation uses supervised learning, which means that the algorithm is provided with example inputs together with the outputs the network should compute, and the error between the actual and desired outputs is then calculated.

  • Backpropagation is fast, simple, and easy to program

  • It has no parameters to tune apart from the number of inputs

  • It is a flexible method as it does not require prior knowledge about the network

  • It is a standard method that generally works well

  • It does not need any special mention of the features of the function to be learned.


FIG:1

HOW BACKPROPAGATION WORKS

FIG:2

The above network contains the following:

  • Two inputs

  • Two hidden neurons

  • Two output neurons

  • Two biases

There are three steps involved in this:

  1. Forward propagation

  2. Backward propagation

  3. Putting all the values together and calculating the updated weight value


Step – 1: Forward Propagation

We will start by propagating forward.

FIG:3

We will repeat this process for the output layer neurons, using the output from the hidden layer neurons as inputs.
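The forward pass described above can be sketched in a few lines of Python. This is a minimal sketch assuming sigmoid activations; the specific numeric inputs, weights, and biases used below are illustrative assumptions, not values taken from the figures.

```python
import math

def sigmoid(x):
    """Logistic activation, squashing any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(i1, i2, w, b1, b2):
    """One forward pass through the 2-2-2 network described above.
    Layout of w: w[0..3] feed the hidden layer, w[4..7] feed the output layer."""
    # Hidden layer: each neuron combines both inputs with its weights plus bias b1.
    h1 = sigmoid(w[0] * i1 + w[1] * i2 + b1)
    h2 = sigmoid(w[2] * i1 + w[3] * i2 + b1)
    # Output layer: reuses the hidden outputs as its inputs, plus bias b2.
    o1 = sigmoid(w[4] * h1 + w[5] * h2 + b2)
    o2 = sigmoid(w[6] * h1 + w[7] * h2 + b2)
    return h1, h2, o1, o2
```

Calling `forward` with any inputs, eight weights, and two biases produces the hidden and output activations used in the steps that follow.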


FIG:4

Now, let’s see what the value of the error is:


FIG:5
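One common choice for the error, and the usual one in worked backpropagation examples, is the squared error summed over both output neurons. A minimal sketch, assuming that definition:

```python
def total_error(targets, outputs):
    """Squared error summed over the output neurons: E = 1/2 * sum((t - o)^2)."""
    return 0.5 * sum((t - o) ** 2 for t, o in zip(targets, outputs))
```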

Step – 2: Backward Propagation

Now, we will propagate backward. This way, we try to reduce the error by changing the values of the weights and biases. Consider W5: we will calculate the rate of change of the error w.r.t. a change in weight W5.


FIG:6

Since we are propagating backward, the first thing we need to do is calculate the change in the total error w.r.t. the outputs O1 and O2.


FIG:7
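Assuming a squared error E = 1/2 * (target - out)^2 per output neuron, the derivative of the error w.r.t. an output has a simple closed form:

```python
def dE_dout(target, out):
    """For E = 1/2 * (target - out)^2, the derivative w.r.t. out is -(target - out)."""
    return -(target - out)
```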

Now, we will propagate further backward and calculate the change in output O1 w.r.t. its total net input.


FIG:8
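Assuming sigmoid activations, the derivative of a neuron's output w.r.t. its net input can be written entirely in terms of the output itself:

```python
def dout_dnet(out):
    """Derivative of the sigmoid w.r.t. its net input, in terms of the output: out * (1 - out)."""
    return out * (1.0 - out)
```

This is why the forward-pass outputs are cached: the backward pass reuses them directly.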

Let’s now see how much the total net input of O1 changes w.r.t. W5.

FIG:9

Step – 3: Putting all the values together and calculating the updated weight value

Now, let’s put all the values together:


FIG:10
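Multiplying the three factors with the chain rule gives the gradient for W5. This sketch assumes sigmoid outputs and squared error; the last factor is simply out_h1 because net_O1 = W5*out_h1 + W6*out_h2 + b2:

```python
def dE_dw5(target_o1, out_o1, out_h1):
    """Chain rule: dE/dW5 = dE/dO1 * dO1/dnet_O1 * dnet_O1/dW5."""
    return (out_o1 - target_o1) * out_o1 * (1.0 - out_o1) * out_h1
```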

Let’s calculate the updated value of W5:

FIG:11
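The update itself is a plain gradient-descent step. The learning rate eta = 0.5 below is an illustrative assumption:

```python
def update_weight(w, grad, eta=0.5):
    """Gradient-descent step: move the weight against its error gradient, scaled by eta."""
    return w - eta * grad
```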


  • Similarly, we can calculate the other weight values as well.

  • After that, we will again propagate forward and calculate the output. Again, we will calculate the error.

  • If the error is at a minimum, we stop right there; otherwise, we propagate backward again and update the weight values.

  • This process keeps repeating until the error reaches its minimum.
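The whole procedure, forward pass, backward pass, and weight updates repeated until the error stops improving, can be sketched for the 2-2-2 network as follows. All numeric values in the usage below (inputs, targets, initial weights, biases, learning rate) are illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(inputs, targets, w, b1, b2, eta=0.5):
    """One full iteration: forward pass, backward pass, and weight update.
    Layout of w: w[0..3] feed the hidden layer, w[4..7] feed the output layer."""
    i1, i2 = inputs
    t1, t2 = targets
    # --- Step 1: forward propagation ---
    h1 = sigmoid(w[0] * i1 + w[1] * i2 + b1)
    h2 = sigmoid(w[2] * i1 + w[3] * i2 + b1)
    o1 = sigmoid(w[4] * h1 + w[5] * h2 + b2)
    o2 = sigmoid(w[6] * h1 + w[7] * h2 + b2)
    error = 0.5 * ((t1 - o1) ** 2 + (t2 - o2) ** 2)
    # --- Step 2: backward propagation (chain rule at each layer) ---
    d1 = (o1 - t1) * o1 * (1 - o1)               # dE/dnet_O1
    d2 = (o2 - t2) * o2 * (1 - o2)               # dE/dnet_O2
    dh1 = (d1 * w[4] + d2 * w[6]) * h1 * (1 - h1)  # dE/dnet_H1
    dh2 = (d1 * w[5] + d2 * w[7]) * h2 * (1 - h2)  # dE/dnet_H2
    grads = [dh1 * i1, dh1 * i2, dh2 * i1, dh2 * i2,
             d1 * h1, d1 * h2, d2 * h1, d2 * h2]
    # --- Step 3: update every weight against its gradient ---
    new_w = [wi - eta * g for wi, g in zip(w, grads)]
    return new_w, error

# Repeating train_step on one training pair drives the error down,
# mirroring the bullet list above.
w = [0.15, 0.20, 0.25, 0.30, 0.40, 0.45, 0.50, 0.55]
for _ in range(10):
    w, err = train_step((0.05, 0.10), (0.01, 0.99), w, 0.35, 0.60)
```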




Madras Scientific Research Foundation
