I want to get an accurate modulo of x and y in a WebGL fragment shader. x and y are integers. Graphing mod(x, y), we get the following:
The actual code used to generate the red-and-black rectangle is:
gl_FragColor = vec4(mod(
    float(int(v_texCoord[0] * 15.)) / 15.,
    float(int(v_texCoord[1] * 15.)) / 15.
), 0, 0, 1);
Where v_texCoord is a vec2 ranging from (0,0) at the top-left to (1,1) at the bottom-right. Precision is set to mediump for both float and int.
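For reference, the relevant declarations look something like this (standard boilerplate; only the varying name comes from my actual shader):

precision mediump float;
precision mediump int;

varying vec2 v_texCoord; // (0,0) at the top-left, (1,1) at the bottom-right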
Reading the chart, we see that although mod(6,6) is correctly 0, mod(7,7) is actually 7! How do I fix this?
I tried to implement my own mod() function. However, it has the same errors, and produces the same graph.
// Compute x mod y via floats: x - y * floor(x / y)
int func_mod(int x, int y) {
    return int(float(x) - float(y) * floor(float(x) / float(y)));
}
In JavaScript, where I can debug it, the function works perfectly. I then tried an iterative approach, because I was worried I was going insane and I didn't trust the floating-point division anyway.
// Repeated-subtraction modulo. The for-loop condition is always true,
// so it effectively acts as while (x >= y).
int iter_mod(int x, int y) {
    x = int(abs(float(x)));
    y = int(abs(float(y)));
    for (int i = 0; i > -1; i++) {
        if (x < y) break;
        x = x - y;
    }
    return x;
}
This worked, but I can't graph it because it crashes Linux with an error in ring 0 when I try. It works fine for the spritesheet calculations I need it for, but I really feel it's an incorrect solution.
(Update: It works perfectly on my phone. So it's not my code that's in error now; it's just my problem…)
Here is a GLSL function that calculates MOD accurately when its float parameters hold (approximately) integer values:
/**
 * Returns accurate MOD when arguments are approximate integers.
 */
float modI(float a, float b) {
    float m = a - floor((a + 0.5) / b) * b;
    return floor(m + 0.5);
}
Please note that if a < 0 and b > 0, the return value will be >= 0, unlike the % operator in other languages.
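For illustration, a minimal usage sketch; the index and columns names are hypothetical placeholders, and it assumes the modI function above is pasted into the same shader:

precision mediump float;

void main() {
    float index = 22.0;               // e.g. a sprite index stored as a float
    float columns = 8.0;              // sprites per row of the sheet
    float col = modI(index, columns); // 6.0 (the rounding absorbs mediump error)
    // Negative first argument: modI(-3.0, 5.0) returns 2.0,
    // whereas -3 % 5 evaluates to -2 in C or JavaScript.
    gl_FragColor = vec4(col / columns, 0.0, 0.0, 1.0);
}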