I noticed that my routine to convert from 24-bit RGB888 to 16-bit RGB565 darkens the colors progressively each time a conversion takes place... The formula uses linear interpolation, like so...
typedef unsigned char BYTE;   // 8-bit channel value
typedef unsigned short WORD;  // 16-bit pixel value
// Assuming the usual RGB565 bit layout: RRRRRGGG GGGBBBBB
#define REDSHIFT   11
#define GREENSHIFT  5
#define BLUESHIFT   0
#define REDMASK    0xF800
#define GREENMASK  0x07E0
#define BLUEMASK   0x001F
typedef struct _RGB24 RGB24;
struct _RGB24 {  // fields in BGR byte order
    BYTE b;
    BYTE g;
    BYTE r;
};
RGB24 *s; // source pixels (24-bit)
WORD *d;  // destination pixels (16-bit)
int x;    // pixel index
WORD r;
WORD g;
WORD b;
// Code to convert from 24-bit to 16-bit
r = (WORD)((double)(s[x].r * 31) / 255.0);
g = (WORD)((double)(s[x].g * 63) / 255.0);
b = (WORD)((double)(s[x].b * 31) / 255.0);
d[x] = (r << REDSHIFT) | (g << GREENSHIFT) | (b << BLUESHIFT);
// Code to convert from 16-bit to 24-bit
s[x].r = (BYTE)((double)(((d[x] & REDMASK) >> REDSHIFT) * 255) / 31.0);
s[x].g = (BYTE)((double)(((d[x] & GREENMASK) >> GREENSHIFT) * 255) / 63.0);
s[x].b = (BYTE)((double)(((d[x] & BLUEMASK) >> BLUESHIFT) * 255) / 31.0);
The conversion from 16-bit back to 24-bit is similar, just with the interpolation reversed... I don't understand how the values keep getting lower and lower each time a color is cycled through the equations if the two are opposites of each other... Originally there was no cast to double, but I figured that making it a floating-point divide would get rid of the falloff... but it still does...
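For what it's worth, here is a minimal loop that reproduces the falloff on a single channel (a sketch using the declarations above; it needs stdio.h for the printf):

#include <stdio.h>
// Cycle one channel value through the two conversions repeatedly
BYTE v = 126;
for (int i = 0; i < 5; i++) {
    WORD q = (WORD)((double)(v * 31) / 255.0);   // 8-bit -> 5-bit
    v = (BYTE)((double)(q * 255) / 31.0);        // 5-bit -> 8-bit
    printf("%d ", v);                            // prints: 123 115 106 98 90
}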
When you convert your double values to WORD, the values are being truncated. For example, (126 * 31) / 255 = 15.318, which is truncated to 15. Because the values are truncated rather than rounded, they get progressively lower with each iteration. You need to introduce rounding, by adding 0.5 to each calculated value before converting it to an integer.
Continuing the example, you then take 15 and convert back: (15 * 255) / 31 = 123.387, which truncates to 123. On the next pass, (123 * 31) / 255 = 14.953 truncates to 14, and (14 * 255) / 31 = 115.161 truncates to 115, so the channel keeps sliding downward.
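Here is a sketch of the conversions with rounding added, assuming the RGB565 masks and shifts from the question; the only change is the + 0.5 before each cast:

// 24-bit to 16-bit, now rounding to nearest instead of truncating
r = (WORD)((double)(s[x].r * 31) / 255.0 + 0.5);
g = (WORD)((double)(s[x].g * 63) / 255.0 + 0.5);
b = (WORD)((double)(s[x].b * 31) / 255.0 + 0.5);
d[x] = (r << REDSHIFT) | (g << GREENSHIFT) | (b << BLUESHIFT);
// 16-bit back to 24-bit, also rounded
s[x].r = (BYTE)((double)(((d[x] & REDMASK) >> REDSHIFT) * 255) / 31.0 + 0.5);
s[x].g = (BYTE)((double)(((d[x] & GREENMASK) >> GREENSHIFT) * 255) / 63.0 + 0.5);
s[x].b = (BYTE)((double)(((d[x] & BLUEMASK) >> BLUESHIFT) * 255) / 31.0 + 0.5);

With rounding, the round trip is stable: 126 quantizes to 15, expands to 123, and 123 quantizes back to 15 (14.953 rounds up), so the color stops drifting after the first conversion. (A common floating-point-free alternative for the 16-to-24 expansion is bit replication, e.g. (r5 << 3) | (r5 >> 2) for a 5-bit channel, which also maps 31 to exactly 255.)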