Intensity normalization of an image using Python+PIL - Speed issues

Happy · Sep 14, 2011

I'm working on a little problem in my spare time involving analysis of some images obtained through a microscope. It is a wafer with some stuff here and there, and ultimately I want to make a program to detect when certain materials show up.

Anyway, the first step is to normalize the intensity across the image, since the lens does not give uniform lighting. Currently I use an image with nothing on it, only the substrate, as a background (reference) image. I find the maximum intensity value of each of the three RGB channels.

from PIL import Image
from PIL import ImageDraw

rmax, gmax, bmax = 0, 0, 0
rmin, gmin, bmin = 300, 300, 300

im_old = Image.open("test_image.png")
im_back = Image.open("background.png")

maxx = im_old.size[0]  # width of the image
maxy = im_old.size[1]  # height of the image
im_new = Image.new("RGB", (maxx,maxy))


pixback = im_back.load()
for x in range(maxx):
    for y in range(maxy):
        if pixback[x,y][0] > rmax:
            rmax = pixback[x,y][0]
        if pixback[x,y][1] > gmax:
            gmax = pixback[x,y][1]
        if pixback[x,y][2] > bmax:
            bmax = pixback[x,y][2]


pixnew = im_new.load()
pixold = im_old.load()
for x in range(maxx):
    for y in range(maxy):
        r = float(pixold[x,y][0]) / ( float(pixback[x,y][0])*rmax )
        g = float(pixold[x,y][1]) / ( float(pixback[x,y][1])*gmax )
        b = float(pixold[x,y][2]) / ( float(pixback[x,y][2])*bmax )
        pixnew[x,y] = (r,g,b)

The first part of the code determines the maximum intensity of the RED, GREEN and BLUE channels, pixel by pixel, in the background image, but it only needs to be done once.
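As a rough sketch, this one-off step collapses to a single reduction once the background image is in a numpy array (assuming numpy is installed and the PNG decodes to three channels):

import numpy as np
from PIL import Image

back = np.asarray(Image.open("background.png").convert("RGB"))
rmax, gmax, bmax = back.max(axis=(0, 1))  # per-channel maxima over all pixels, no Python loops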

The second part takes the "real" image (with stuff on it) and normalizes the RED, GREEN and BLUE channels, pixel by pixel, according to the background. This takes some time, 5-10 seconds for a 1280x960 image, which is far too slow if I need to do this to several images.

What can I do to improve the speed? I thought of moving all the images to numpy arrays, but I can't seem to find a fast way to do that for RGB images. I'd rather not move away from Python, since my C++ is quite low-level, and getting a working FORTRAN code would probably take longer than I could ever save in terms of speed :P
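For the conversion itself, a minimal PIL-to-numpy round trip for an RGB image (assuming numpy is available) would look something like this, with whole-array arithmetic replacing the per-pixel loops:

import numpy as np
from PIL import Image

img = Image.open("test_image.png").convert("RGB")  # same test image as above
arr = np.asarray(img)                              # (height, width, 3) uint8 array
# ... whole-array arithmetic on arr goes here instead of nested loops ...
Image.fromarray(arr, "RGB").save("test_image_processed.png")  # example output path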

Answer

unutbu · Sep 14, 2011
import numpy as np
from PIL import Image

def normalize(arr):
    """
    Linear normalization
    http://en.wikipedia.org/wiki/Normalization_%28image_processing%29
    """
    arr = arr.astype('float')
    # Do not touch the alpha channel
    for i in range(3):
        minval = arr[...,i].min()
        maxval = arr[...,i].max()
        if minval != maxval:
            arr[...,i] -= minval
            arr[...,i] *= (255.0/(maxval-minval))
    return arr

def demo_normalize():
    img = Image.open(FILENAME).convert('RGBA')
    arr = np.array(img)
    new_img = Image.fromarray(normalize(arr).astype('uint8'),'RGBA')
    new_img.save('/tmp/normalized.png')
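The same array representation also turns the background correction from the question into a couple of whole-array operations, which is where the speedup over the nested per-pixel loops comes from. The sketch below is only an outline of that idea: it assumes the intent is to divide each channel by the background image and rescale by that channel's maximum background value (the loop in the question divides by the product of the two, which would leave values far below 1), and it clips the result to the 8-bit range.

import numpy as np
from PIL import Image

old  = np.asarray(Image.open("test_image.png").convert("RGB"), dtype=float)
back = np.asarray(Image.open("background.png").convert("RGB"), dtype=float)

back = np.maximum(back, 1.0)            # avoid division by zero in dark background pixels
maxs = back.max(axis=(0, 1))            # per-channel maxima, shape (3,)

corrected = old / back * maxs           # flat-field correction, broadcast over the channel axis
corrected = np.clip(corrected, 0, 255)  # keep values in the 8-bit range

Image.fromarray(corrected.astype('uint8'), 'RGB').save('corrected.png')  # example output path

Since the per-element work happens in numpy's compiled code rather than in Python loops, this is typically orders of magnitude faster than the 5-10 seconds quoted in the question.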