Bi-Cubic Interpolation Algorithm for Image Scaling

Channel72 · Mar 2, 2013 · Viewed 27.9k times

I'm trying to write a basic bicubic resize algorithm to resize a 24-bit RGB bitmap. I have a general understanding of the math involved, and I'm using this implementation from Google Code as a guide. I'm not using any external libraries here - I'm just experimenting with the algorithm itself. The bitmap is represented as a plain std::vector<unsigned char>:

inline unsigned char getpixel(const std::vector<unsigned char>& in, 
    std::size_t src_width, std::size_t src_height, unsigned x, unsigned y, int channel)
{
    if (x < src_width && y < src_height)
        return in[(x * 3 * src_width) + (3 * y) + channel];

    return 0;
}

std::vector<unsigned char> bicubicresize(const std::vector<unsigned char>& in, 
    std::size_t src_width, std::size_t src_height, std::size_t dest_width, std::size_t dest_height)
{
    std::vector<unsigned char> out(dest_width * dest_height * 3);

    // Ratio of source size to destination size along each axis; a destination
    // pixel (j, i) maps back to source position (tx * j, ty * i).
    const float tx = float(src_width) / dest_width;
    const float ty = float(src_height) / dest_height;
    const int channels = 3;
    const std::size_t row_stride = dest_width * channels;

    // Holds the horizontally interpolated value for each of the four sample rows.
    unsigned char C[5] = { 0 };

    for (int i = 0; i < dest_height; ++i)
    {
        for (int j = 0; j < dest_width; ++j)
        {
            // Integer source coordinates of the sample, plus the fractional
            // offsets within the source cell.
            const int x = int(tx * j);
            const int y = int(ty * i);
            const float dx = tx * j - x;
            const float dy = ty * i - y;

            for (int k = 0; k < 3; ++k) // one pass per RGB channel
            {
                // Cubic interpolation along each of the four source rows
                // bracketing the sample point.
                for (int jj = 0; jj < 4; ++jj)
                {
                    const int z = y - 1 + jj;
                    unsigned char a0 = getpixel(in, src_width, src_height, z, x, k);
                    unsigned char d0 = getpixel(in, src_width, src_height, z, x - 1, k) - a0;
                    unsigned char d2 = getpixel(in, src_width, src_height, z, x + 1, k) - a0;
                    unsigned char d3 = getpixel(in, src_width, src_height, z, x + 2, k) - a0;
                    unsigned char a1 = -1.0 / 3 * d0 + d2 - 1.0 / 6 * d3;
                    unsigned char a2 = 1.0 / 2 * d0 + 1.0 / 2 * d2;
                    unsigned char a3 = -1.0 / 6 * d0 - 1.0 / 2 * d2 + 1.0 / 6 * d3;
                    C[jj] = a0 + a1 * dx + a2 * dx * dx + a3 * dx * dx * dx;

                    // Then a cubic pass across the row results; only the final
                    // iteration (jj == 3) writes with all four C values filled.
                    d0 = C[0] - C[1];
                    d2 = C[2] - C[1];
                    d3 = C[3] - C[1];
                    a0 = C[1];
                    a1 = -1.0 / 3 * d0 + d2 - 1.0 / 6 * d3;
                    a2 = 1.0 / 2 * d0 + 1.0 / 2 * d2;
                    a3 = -1.0 / 6 * d0 - 1.0 / 2 * d2 + 1.0 / 6 * d3;
                    out[i * row_stride + j * channels + k] = a0 + a1 * dy + a2 * dy * dy + a3 * dy * dy * dy;
                }
            }
        }
    }

    return out;
}

Problem: When I use this algorithm to downscale an image, the result is mostly correct, except that the right side of the output is filled with black pixels, which makes the image look as if it has been "cropped".

Example:

INPUT IMAGE: [original bitmap]

OUTPUT IMAGE: [downscaled bitmap; the right side is solid black]

Question: Reviewing the algorithm, I can't see why this would happen. Does anyone see the flaw here?

Answer

V-X · Mar 2, 2013

Try not exchanging width and height. getpixel indexes the buffer with its first coordinate as the row (the stride is 3 * src_width) and its second as the column, but its bounds check tests the first against src_width and the second against src_height. On any non-square image, every sample whose horizontal position is >= src_height fails that check and returns 0, which is exactly the black band on the right side. Start by swapping the loop bounds so each dimension is compared against the right limit:

for (int i = 0; i < dest_width; ++i)
{
    for (int j = 0; j < dest_height; ++j)
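A minimal sketch of getpixel with one consistent convention may make this concrete. It assumes x is the column (0 .. src_width - 1), y is the row (0 .. src_height - 1), and the same row-major, 3-bytes-per-pixel layout the question describes; the signature mirrors the question's code:

#include <cstddef>
#include <vector>

inline unsigned char getpixel(const std::vector<unsigned char>& in,
    std::size_t src_width, std::size_t src_height, unsigned x, unsigned y, int channel)
{
    // Sketch, not the one true fix: x is the column, y is the row.
    // Negative coordinates wrap to huge unsigned values and fail the
    // check, so off-image taps read as black, as in the original.
    if (x < src_width && y < src_height)
        return in[(y * src_width + x) * 3 + channel];

    return 0;
}

With that convention, every call in bicubicresize would pass the horizontal sample position first and the row z second, e.g. getpixel(in, src_width, src_height, x - 1, z, k), and the bounds check then rejects only the taps that genuinely fall outside the image instead of blacking out valid right-edge pixels.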