The problem I am working on is extracting text from images, and for this I have used Tesseract v3.02. The sample images I have to extract text from are meter readings. Some of them have a solid sheet background and some have an LED display. I have trained a dataset for the solid sheet background and the results are reasonably effective.
The major problem I have now is that the text images with an LED/LCD background are not recognized by Tesseract, so the training set for them cannot be generated.
Can anyone point me in the right direction on how to use Tesseract with a seven-segment display (LCD/LED background), or is there another alternative I can use instead of Tesseract?
This seems like an image preprocessing task. Tesseract would really prefer its images to be white-on-black text in bitmap format. If you give it something that isn't that, it will do its best to convert it to that format, and it is not very smart about how it does this. Using some image manipulation tool (I happen to like ImageMagick), you need to make the images more to Tesseract's satisfaction. An easy first pass might be a small-radius gaussian blur, a threshold at a pretty low value (you're trying to keep only black, so 15% seems right), and then an inversion of the image.
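As a rough illustration of that blur → threshold → invert pipeline, here is a stdlib-only Python sketch (the pixel layout, radius, and file names are assumptions for the example; with ImageMagick the equivalent would be roughly `convert input.png -colorspace Gray -gaussian-blur 0x1 -threshold 15% -negate out.png`):

```python
def preprocess(pixels, threshold=0.15):
    """pixels: 2D list of grayscale values in 0..255.
    Returns a white-on-black bitmap (255 = text) after a cheap blur,
    a low threshold, and an inversion."""
    h, w = len(pixels), len(pixels[0])
    # 3x3 box blur as a simple stand-in for a small-radius gaussian
    blurred = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [pixels[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            blurred[y][x] = sum(vals) // len(vals)
    cutoff = int(threshold * 255)
    # keep only near-black pixels (the ink), then invert so text is white
    return [[255 if v <= cutoff else 0 for v in row] for row in blurred]
```

In practice you would do this with ImageMagick or Pillow on real image files; the point of the sketch is only the order of operations.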
The hard part then becomes knowing which preprocessing to apply. If you have metadata telling you what sort of display you're dealing with, great. If not, I suspect you could look at the image's color histogram to at least figure out whether your text is white-on-black or black-on-color. If those are the only two scenarios, white-on-black is always the solid background, and black-on-color is always the seven-segment display, then you're done. If not, you'll have to be clever. Good luck, and please let us know what you come up with.
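A minimal sketch of that histogram idea, assuming grayscale input (the 128 cutoff and the majority rule are my own illustrative choices, not something from a library):

```python
def looks_white_on_black(pixels):
    """Heuristic polarity check: if most of the image is dark,
    the text is probably white-on-black (e.g. an LED display);
    otherwise assume dark text on a lighter background."""
    flat = [v for row in pixels for v in row]
    dark = sum(1 for v in flat if v < 128)
    return dark > len(flat) / 2
```

You would then branch on the result to pick the preprocessing recipe, inverting only when needed so Tesseract always sees the same polarity.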