Is there a library for image warping / image morphing for python with controlled points?

smln · Feb 21, 2011

You'd take two images and mark specific points on each (for example, the regions around the eyes, nose, mouth, etc. of two people), then warp one image so that its marked points move onto the corresponding points marked in the other image. Something like:

transform(original_image, marked_points_in_the_original, marked_points_in_the_reference)

I can't seem to find an algorithm describing it, nor can I find any libraries that implement it. I'm willing to do it myself, as long as I can find good, easy-to-follow material on it. I know it's possible, though, since I've seen some incomplete PDFs on Google (they don't really explain how to do it).
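For reference, the simplest version of a "controlled points" transform is a least-squares affine fit between the two point sets: solve for the 2×3 affine map that best sends the source points to the reference points, then apply it everywhere. A minimal NumPy sketch (`fit_affine` / `apply_affine` are my own names, not a library API):

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src_pts onto dst_pts.

    Each point set is an (N, 2) sequence with N >= 3 matched points.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    # Build [x, y, 1] rows so that A @ params approximates dst.
    A = np.hstack([src, np.ones((len(src), 1))])
    # Solve for x- and y-output columns simultaneously.
    params, _, _, _ = np.linalg.lstsq(A, dst, rcond=None)
    return params  # shape (3, 2)

def apply_affine(params, pts):
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params

# Example: three matched points describing a pure translation by (10, 5);
# the fit recovers it exactly.
src = [(0, 0), (1, 0), (0, 1)]
dst = [(10, 5), (11, 5), (10, 6)]
params = fit_affine(src, dst)
```

A single affine can't bend features independently; for true morphing you'd fit one such map per triangle of a triangulation of the points (piecewise affine), but the per-piece math is the same.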

Here's an example of the marked points and the transformation, since you asked for clarification; this one doesn't use two people as described above, but the idea is the same.


Edit: I managed to get the im.transform method working, but its argument is a list of ((box_x0, box_y0, box_x1, box_y1), (x0, y0, x1, y1, x2, y2, x3, y3)) pairs, with the quad's first point being NW, the second SW, the third SE and the fourth NE. (0, 0) is the top-left corner of the image as far as I could tell. If I did everything right, then this method doesn't really do what I need.
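As a side note on the corner order: Pillow's documentation gives the QUAD source quadrilateral as upper-left, lower-left, lower-right, upper-right, i.e. NW, SW, SE, NE. A quick identity-quad sanity check on a tiny synthetic image (my own sketch, not from the question):

```python
from PIL import Image

# 2x2 grayscale image with four distinct pixel values.
im = Image.new('L', (2, 2))
im.putdata([10, 20, 30, 40])

w, h = im.size
# Source quad in NW, SW, SE, NE order; this particular quad is the
# identity mapping, so the output should reproduce the input exactly.
identity_quad = (0, 0, 0, h, w, h, w, 0)
out = im.transform(im.size, Image.QUAD, identity_quad)
```

Swapping the last two corners (NE before SE) produces a visibly sheared result, which is an easy way to confirm you have the order right.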

Answer

Warbean · May 20, 2016

The sample code given by Blender didn't work for me, and the PIL documentation for im.transform is ambiguous, so I dug into the PIL source code and finally figured out how to use the interface. Here's my complete usage:

import numpy as np
from PIL import Image

def quad_as_rect(quad):
    # True iff the quad (corners in NW, SW, SE, NE order) is an axis-aligned rectangle.
    if quad[0] != quad[2]: return False
    if quad[1] != quad[7]: return False
    if quad[4] != quad[6]: return False
    if quad[3] != quad[5]: return False
    return True

def quad_to_rect(quad):
    # Collapse a rectangular quad back to PIL's (left, upper, right, lower) box.
    assert(len(quad) == 8)
    assert(quad_as_rect(quad))
    return (quad[0], quad[1], quad[4], quad[3])

def rect_to_quad(rect):
    # Expand a (left, upper, right, lower) box into NW, SW, SE, NE corner coordinates.
    assert(len(rect) == 4)
    return (rect[0], rect[1], rect[0], rect[3], rect[2], rect[3], rect[2], rect[1])

def shape_to_rect(shape):
    # Treat a (width, height) size as the box (0, 0, width, height).
    assert(len(shape) == 2)
    return (0, 0, shape[0], shape[1])

def griddify(rect, w_div, h_div):
    # Cut rect into a w_div x h_div grid; returns an (h_div + 1, w_div + 1, 2)
    # array of [x, y] vertex coordinates.
    w = rect[2] - rect[0]
    h = rect[3] - rect[1]
    x_step = w / float(w_div)
    y_step = h / float(h_div)
    y = rect[1]
    grid_vertex_matrix = []
    for _ in range(h_div + 1):
        grid_vertex_matrix.append([])
        x = rect[0]
        for _ in range(w_div + 1):
            grid_vertex_matrix[-1].append([int(x), int(y)])
            x += x_step
        y += y_step
    grid = np.array(grid_vertex_matrix)
    return grid

def distort_grid(org_grid, max_shift):
    # Randomly shift each vertex by up to max_shift pixels in x and y,
    # clamped to the original grid's bounding box. Note the border vertices
    # move too, so the warped image may have blank edges.
    new_grid = np.copy(org_grid)
    x_min = np.min(new_grid[:, :, 0])
    y_min = np.min(new_grid[:, :, 1])
    x_max = np.max(new_grid[:, :, 0])
    y_max = np.max(new_grid[:, :, 1])
    new_grid += np.random.randint(- max_shift, max_shift + 1, new_grid.shape)
    new_grid[:, :, 0] = np.maximum(x_min, new_grid[:, :, 0])
    new_grid[:, :, 1] = np.maximum(y_min, new_grid[:, :, 1])
    new_grid[:, :, 0] = np.minimum(x_max, new_grid[:, :, 0])
    new_grid[:, :, 1] = np.minimum(y_max, new_grid[:, :, 1])
    return new_grid

def grid_to_mesh(src_grid, dst_grid):
    # Pair each destination cell's bounding box with its (distorted) source
    # quad, in the [(bbox, quad), ...] form Image.transform(Image.MESH) expects.
    assert(src_grid.shape == dst_grid.shape)
    mesh = []
    for i in range(src_grid.shape[0] - 1):
        for j in range(src_grid.shape[1] - 1):
            src_quad = [src_grid[i    , j    , 0], src_grid[i    , j    , 1],
                        src_grid[i + 1, j    , 0], src_grid[i + 1, j    , 1],
                        src_grid[i + 1, j + 1, 0], src_grid[i + 1, j + 1, 1],
                        src_grid[i    , j + 1, 0], src_grid[i    , j + 1, 1]]
            dst_quad = [dst_grid[i    , j    , 0], dst_grid[i    , j    , 1],
                        dst_grid[i + 1, j    , 0], dst_grid[i + 1, j    , 1],
                        dst_grid[i + 1, j + 1, 0], dst_grid[i + 1, j + 1, 1],
                        dst_grid[i    , j + 1, 0], dst_grid[i    , j + 1, 1]]
            dst_rect = quad_to_rect(dst_quad)
            mesh.append([dst_rect, src_quad])
    return mesh

im = Image.open('./old_driver/data/train/c0/img_292.jpg')
dst_grid = griddify(shape_to_rect(im.size), 4, 4)  # regular 4x4 grid over the image
src_grid = distort_grid(dst_grid, 50)              # jitter each vertex by up to 50 px
mesh = grid_to_mesh(src_grid, dst_grid)
im = im.transform(im.size, Image.MESH, mesh)
im.show()
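The code above distorts a single image. To actually morph between two images, as the question asks, a common approach (my extrapolation, not part of this answer) is to build matched grids for both images, warp each toward a shared intermediate grid with the same Image.MESH call, and then cross-dissolve the two warped frames. The dissolve step is just Image.blend:

```python
from PIL import Image

# Hypothetical stand-ins for two already-warped frames of equal size;
# in practice these would come from im.transform(..., Image.MESH, ...).
warped_a = Image.new('RGB', (64, 64), (255, 0, 0))
warped_b = Image.new('RGB', (64, 64), (0, 0, 255))

# alpha sweeps from 0 to 1 over the morph; 0.5 is the halfway frame.
halfway = Image.blend(warped_a, warped_b, 0.5)
```

Generating one frame per alpha value and saving them in sequence gives the familiar morphing animation.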

Before and after images (omitted).

I suggest executing the above code in IPython and printing out mesh to understand what kind of input im.transform needs. For me the output is:

In [1]: mesh
Out[1]:
[[(0, 0, 160, 120), [0, 29, 29, 102, 186, 120, 146, 0]],
 [(160, 0, 320, 120), [146, 0, 186, 120, 327, 127, 298, 48]],
 [(320, 0, 480, 120), [298, 48, 327, 127, 463, 77, 492, 26]],
 [(480, 0, 640, 120), [492, 26, 463, 77, 640, 80, 605, 0]],
 [(0, 120, 160, 240), [29, 102, 9, 241, 162, 245, 186, 120]],
 [(160, 120, 320, 240), [186, 120, 162, 245, 339, 214, 327, 127]],
 [(320, 120, 480, 240), [327, 127, 339, 214, 513, 284, 463, 77]],
 [(480, 120, 640, 240), [463, 77, 513, 284, 607, 194, 640, 80]],
 [(0, 240, 160, 360), [9, 241, 27, 364, 202, 365, 162, 245]],
 [(160, 240, 320, 360), [162, 245, 202, 365, 363, 315, 339, 214]],
 [(320, 240, 480, 360), [339, 214, 363, 315, 453, 373, 513, 284]],
 [(480, 240, 640, 360), [513, 284, 453, 373, 640, 319, 607, 194]],
 [(0, 360, 160, 480), [27, 364, 33, 478, 133, 480, 202, 365]],
 [(160, 360, 320, 480), [202, 365, 133, 480, 275, 480, 363, 315]],
 [(320, 360, 480, 480), [363, 315, 275, 480, 434, 469, 453, 373]],
 [(480, 360, 640, 480), [453, 373, 434, 469, 640, 462, 640, 319]]]
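A pattern worth noting in this output: the destination boxes are just a regular 4×4 tiling of the 640×480 output (that is what griddify produces before any distortion); only the source quads are perturbed. The tiling can be reproduced directly, with the sizes taken from the output above:

```python
w, h = 640, 480        # image size implied by the mesh output above
w_div, h_div = 4, 4    # grid divisions used in the example

tile_w, tile_h = w // w_div, h // h_div
# Same ordering as the mesh: x varies fastest, then y.
boxes = [(x, y, x + tile_w, y + tile_h)
         for y in range(0, h, tile_h)
         for x in range(0, w, tile_w)]
```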