How does a back-propagation training algorithm work?

unleashed · Jan 26, 2012 · Viewed 17.7k times

I've been trying to learn how back-propagation works with neural networks, but have yet to find a good explanation from a less technical perspective.

How does back-propagation work? How does it learn from a provided training dataset? I will have to code this eventually, but until then I need to gain a stronger understanding of it.

Answer

Sufian Latif · Jan 26, 2012

Back-propagation works with logic very similar to that of feed-forward. The difference is the direction of data flow. In the feed-forward step, you present the inputs and propagate their values forward through the network, layer by layer, to compute the activations of the neurons ahead, until the output layer produces a result.
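
To make that concrete, here is a minimal sketch of the feed-forward step in Python. The sigmoid activation, the nested-list weight layout, and the function names are my own assumptions for illustration, not something fixed by the question:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def feed_forward(inputs, weights, biases):
    """Propagate input values forward through each layer.

    weights[l][j][i] is the weight from neuron i in layer l to
    neuron j in layer l+1; biases[l][j] is neuron j's bias.
    Returns the activations of every layer, because the
    back-propagation step will need them.
    """
    activations = [inputs]
    for layer_w, layer_b in zip(weights, biases):
        prev = activations[-1]
        activations.append([
            sigmoid(sum(w * a for w, a in zip(neuron_w, prev)) + b)
            for neuron_w, b in zip(layer_w, layer_b)
        ])
    return activations
```

The last entry of the returned list is the network's output for that instance; the earlier entries are kept around so the backward pass can compute weight updates.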

In the back-propagation step, you cannot know the error at every neuron, only at those in the output layer. Calculating the errors of the output nodes is straightforward: take the difference between the neuron's output and the actual (target) output for that instance in the training set. The neurons in the hidden layers must derive their errors from these, so you have to pass the error values back to them. Each hidden neuron takes the weighted sum of the errors from the layer ahead as its own error, and from that it can update its weights and other parameters.
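
Continuing the sketch above, here is a hedged illustration of that backward pass, again assuming sigmoid units (whose derivative, written in terms of the activation a, is a * (1 - a)) and a plain gradient-descent weight update. The learning rate of 0.5 is an arbitrary placeholder:

```python
def back_propagate(activations, weights, biases, target, lr=0.5):
    """Propagate the output error backwards and update the weights.

    activations comes from feed_forward(); target is the desired
    output vector for this training instance.
    """
    # Output-layer error: (target - output), scaled by the
    # sigmoid derivative a * (1 - a).
    deltas = [(t - a) * a * (1 - a)
              for t, a in zip(target, activations[-1])]

    # Walk the layers from the output back towards the input.
    for l in range(len(weights) - 1, -1, -1):
        prev = activations[l]
        # Errors for the layer behind: weighted sum of this layer's
        # deltas, computed *before* the weights change.
        prev_deltas = [
            sum(weights[l][j][i] * deltas[j] for j in range(len(deltas)))
            * prev[i] * (1 - prev[i])
            for i in range(len(prev))
        ]
        # Update this layer's weights and biases from its deltas.
        for j, d in enumerate(deltas):
            for i, a in enumerate(prev):
                weights[l][j][i] += lr * d * a
            biases[l][j] += lr * d
        deltas = prev_deltas
```

A full training loop would just call feed_forward followed by back_propagate for every instance in the training set, repeating over the whole set until the output error is small enough.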

A step-by-step demo of feed-forward and back-propagation steps can be found here.


Edit

If you're a beginner to neural networks, start by learning about the Perceptron, then advance to NNs, which are actually multilayer perceptrons.