Training feedforward neural network for OCR

Marnix v. R. · Mar 13, 2012 · Viewed 10k times

Currently I'm learning about neural networks and I'm trying to create an application that can be trained to recognize handwritten characters. For this problem I use a feed-forward neural network and it seems to work when I train it to recognize 1, 2, or 3 different characters. But when I try to make the network learn more than 3 characters, it stagnates at an error rate of around 40 to 60%.

I have tried multiple layers and fewer/more neurons, but I can't seem to get it right, so now I'm wondering whether a feed-forward neural network is even capable of recognizing that much information.

Some statistics:

Network type: Feed-forward neural network

Input neurons: 100 (a 10 * 10 grid is used to draw the characters)

Output neurons: the number of characters to recognize

Does anyone know what the possible flaw in my architecture is? Are there too many input neurons? Is a feed-forward neural network simply not capable of character recognition?
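
For reference, this is roughly how I build the input and target vectors (the names below are just illustrative, not my exact code):

```python
import numpy as np

# each character is drawn on a 10 x 10 grid -> flattened into 100 input values
grid = np.zeros((10, 10))          # 1.0 wherever the character's pixels are set
x = grid.flatten()                 # input vector for the 100 input neurons

# one output neuron per character; the target is a one-hot vector
characters = ['a', 'b', 'c', 'd']  # e.g. four characters to recognize
target = np.zeros(len(characters))
target[characters.index('c')] = 1.0
```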

Answer

alfa · Mar 13, 2012

For handwritten character recognition you need the following (a minimal sketch of points 2-5 follows the list):

  1. many training examples (maybe you should create distortions of your training set)
  2. softmax activation function in the output layer
  3. cross entropy error function
  4. training with stochastic gradient descent
  5. a bias in each layer
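
A minimal sketch of points 2-5, assuming one hidden layer and one-hot targets (illustrative only, not a tuned implementation):

```python
# One-hidden-layer MLP with a bias in every layer, a softmax output,
# cross-entropy loss, and plain stochastic gradient descent.
# All sizes and hyperparameters here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 100, 50, 10              # e.g. 10x10 grid in, 10 characters out
W1 = rng.normal(0.0, 0.1, (n_hidden, n_in)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_out, n_hidden)); b2 = np.zeros(n_out)

def softmax(z):
    z = z - z.max()                              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def sgd_step(x, target, lr=0.1):
    """One stochastic gradient descent update on a single (x, one-hot target) pair."""
    global W1, b1, W2, b2

    # forward pass
    h = np.tanh(W1 @ x + b1)
    y = softmax(W2 @ h + b2)
    loss = -np.sum(target * np.log(y + 1e-12))   # cross-entropy error

    # backward pass: softmax + cross-entropy gives the simple output delta y - target
    d_out = y - target
    d_hidden = (W2.T @ d_out) * (1.0 - h ** 2)   # tanh'(a) = 1 - tanh(a)^2

    # gradient descent update (note that the biases are updated too)
    W2 -= lr * np.outer(d_out, h);    b2 -= lr * d_out
    W1 -= lr * np.outer(d_hidden, x); b1 -= lr * d_hidden
    return loss

# usage: per epoch, shuffle the training set and call sgd_step for every example
```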

A good test problem is the handwritten digit data set MNIST. Here are papers that successfully applied neural networks to this data set:

Y. LeCun, L. Bottou, Y. Bengio and P. Haffner: Gradient-Based Learning Applied to Document Recognition, http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf

Dan Claudiu Ciresan, Ueli Meier, Luca Maria Gambardella, Juergen Schmidhuber: Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition, http://arxiv.org/abs/1003.0358

I trained an MLP with a 784-200-50-10 architecture and got >96% accuracy on the test set.
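
For comparison, a network with the same 784-200-50-10 layout can be put together with an off-the-shelf library; the scikit-learn sketch below is only an illustration of that architecture, not the code behind the number above:

```python
# Illustration only: a 784-200-50-10 MLP trained on MNIST with scikit-learn.
from sklearn.datasets import fetch_openml
from sklearn.neural_network import MLPClassifier

X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
X = X / 255.0                            # scale pixel values to [0, 1]
X_train, X_test = X[:60000], X[60000:]   # standard MNIST train/test split
y_train, y_test = y[:60000], y[60000:]

# two hidden layers (200 and 50 units); for multi-class targets MLPClassifier
# uses a softmax output and minimizes the cross-entropy (log) loss
clf = MLPClassifier(hidden_layer_sizes=(200, 50), solver='sgd',
                    learning_rate_init=0.1, max_iter=20, random_state=0)
clf.fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))
```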