How to Improve OCR on image with text in different colors and fonts?

RaedMarji · Aug 11, 2018

I'm using the Google Vision API to extract the text from some pictures; however, I have been trying to improve the accuracy (confidence) of the results with no luck.

Every time I change the image from the original, I lose accuracy in detecting some characters.

I have isolated the issue to the text having multiple colors for different words: it can be seen that words in red, for example, give incorrect results more often than the other words.

Example:

Original image, plus some variations (grayscale, black and white):

[original image]
[grayscale and black-and-white variations]

What ideas can I try to make this work better? Specifically, should I change the text to a uniform color, or to plain black on a white background, since most OCR algorithms expect that?

Some ideas I have already tried (along with some thresholding):

from PIL import ImageOps, ImageEnhance

# grayscale conversion and an inverted copy
dimg = ImageOps.grayscale(im)
cimg = ImageOps.invert(dimg)

# a factor of 1.0 returns the image unchanged, so use a value
# above 1.0 to actually increase contrast and sharpness
contrast = ImageEnhance.Contrast(dimg)
eimg = contrast.enhance(1.5)

sharp = ImageEnhance.Sharpness(dimg)
eimg = sharp.enhance(1.5)
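One direction that might help, given that the problem words are in known colors: mask each text color separately, then merge the masks into a single black-on-white image before sending it to the OCR engine, since that is the input most engines are tuned for. A minimal sketch with PIL and NumPy; the file name and the red/white thresholds are assumptions to be tuned against the actual images:

import numpy as np
from PIL import Image

# hypothetical input file; adjust to the actual image
im = Image.open("subtitles.png").convert("RGB")
rgb = np.asarray(im, dtype=np.int16)
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# assumed thresholds: strongly red pixels and near-white pixels are text
red_text = (r > 150) & (r - g > 60) & (r - b > 60)
white_text = (r > 200) & (g > 200) & (b > 200)
text = red_text | white_text

# render all detected text as black on a white background
out = np.where(text, 0, 255).astype(np.uint8)
Image.fromarray(out, mode="L").save("normalized.png")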

Answer

LSerni · Aug 20, 2018

I can only offer a butcher's solution, potentially a nightmare to maintain.

In my own, very limited scenario, it worked like a charm where several other OCR engines either failed or had unacceptable running times.

My prerequisites:

  • I knew exactly in which area of the screen the text was going to appear.
  • I knew exactly which fonts and colors were going to be used.
  • The text was semi-transparent, so the underlying image interfered with it, and that image was variable to boot.
  • I could not reliably detect when the text changed, so I could not average frames to reduce the interference.

What I did:

  • I measured the kerning width of each character. I only had A-Za-z0-9 and a handful of punctuation characters to worry about.
  • The program would start at position (0,0), measure the average color to determine the text color, then access the whole set of bitmaps generated from the characters of all available fonts in that color. It would then determine which of those rectangles was closest to the corresponding rectangle on the screen, and advance to the next position (a rough sketch of this loop follows).
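A rough sketch of that matching loop, reconstructed from the description above. The original was a C program; this Python version, the template store, and every name in it are illustrative assumptions:

import numpy as np

def closest_color(avg_rgb, palette):
    # pick the known text color nearest to the measured average
    # (assumption: the set of possible colors is known in advance)
    return min(palette, key=lambda c: sum((a - b) ** 2 for a, b in zip(avg_rgb, c)))

def read_line(line_img, templates, widths, palette):
    # line_img:   H x W x 3 array holding one line of text
    # templates:  {(color, char): H x w grayscale bitmap}, pre-rendered
    #             from every available font in every available color
    # widths:     {char: measured advance width in pixels}
    x, out = 0, []
    w = line_img.shape[1]
    while x < w:
        # measure the average color at the current position
        probe = line_img[:, x:x + 4].reshape(-1, 3).mean(axis=0)
        color = closest_color(probe, palette)
        # compare the on-screen rectangle against every bitmap of that
        # color and keep the closest match (sum of squared differences)
        best_char, best_err = None, float("inf")
        for (c, ch), bmp in templates.items():
            cw = bmp.shape[1]
            if c != color or x + cw > w:
                continue
            err = np.sum((line_img[:, x:x + cw].mean(axis=2) - bmp) ** 2)
            if err < best_err:
                best_char, best_err = ch, err
        if best_char is None:
            break
        out.append(best_char)
        x += widths[best_char]  # advance by the measured character width
    return "".join(out)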

(Months later, when more performance was required, I added a varying probability matrix to test the most likely characters first.)
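That optimization might look something like the helper below; the transition table is an assumption, learned from text already read:

# assumed helper: order candidate characters by how likely they are to
# follow the previous one, so the matching loop can try them first and
# stop early once a template matches below some error threshold
def by_likelihood(prev_char, chars, transition_prob):
    # transition_prob[(prev, nxt)] is a hypothetical learned probability
    return sorted(chars, key=lambda c: -transition_prob.get((prev_char, c), 0.0))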

In the end, the resulting C program was able to read the subtitles out of the video stream with 100% accuracy in real time.