I want to know whether if-statements inside shaders (vertex/fragment/pixel...) really slow down shader performance. For example:
Is it better to use this:
vec3 output;
output = input*enable + input2*(1-enable);
instead of using this:
vec3 output;
if (enable == 1)
{
    output = input;
}
else
{
    output = input2;
}
In another forum there was a discussion about this (2013): http://answers.unity3d.com/questions/442688/shader-if-else-performance.html. There, people say that if-statements are really bad for shader performance.
They also discuss how much code is inside the if/else statements (2012): https://www.opengl.org/discussion_boards/showthread.php/177762-Performance-alternative-for-if-(-)
Maybe the hardware or the shader compilers are better now and have somehow fixed this (possibly nonexistent) performance issue.
EDIT:
What about this case? Let's say enable is a uniform variable and it is always set to 0:
if (enable == 1) // never happens
{
    output = vec4(0, 0, 0, 0);
}
else // always happens
{
    output = calcPhong(normal, lightDir);
}
I think we have a branch inside the shader here that slows it down. Is that correct?
Would it make more sense to write two different shaders, one for the else part and one for the if part?
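For example, I could compile two variants from one source by injecting a define before compilation. Just a sketch of what I mean (ENABLE is a placeholder name):

// Variant 1 is compiled with "#define ENABLE" prepended to the source,
// variant 2 without it; the branch disappears at compile time.
#ifdef ENABLE
    output = vec4(0, 0, 0, 0);
#else
    output = calcPhong(normal, lightDir);
#endif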
What is it about shaders that even potentially makes if statements performance problems? It has to do with how shaders get executed and where GPUs get their massive computing performance from.
Separate shader invocations are usually executed in parallel, executing the same instructions at the same time. They're simply executing them on different sets of input values; they share uniforms, but they have different internal registers. One term for a group of shaders all executing the same sequence of operations is "wavefront".
The potential problem with any form of conditional branching is that it can screw all that up. It causes different invocations within the wavefront to have to execute different sequences of code. That is a very expensive process, whereby a new wavefront has to be created, data copied over to it, etc.
Unless... it doesn't.
For example, if the condition is one that is taken by every invocation in the wavefront, then no runtime divergence is needed. As such, the cost of the if is just the cost of checking a condition.
So, let's say you have a conditional branch, and let's assume that all of the invocations in the wavefront will take the same branch. There are three possibilities for the nature of the expression in that condition:
- Compile-time static: the condition is based entirely on compile-time constants, so the compiler knows from looking at the code which branch will be taken. Pretty much any compiler culls such branches as part of basic optimization.
- Statically uniform branching: the condition is built only from constants and uniform values. But the value of the expression will not be known at compile-time. So the compiler can statically be certain that wavefronts will never be broken by this if, but the compiler cannot know which branch will be taken.
- Dynamically uniform branching: the condition involves values beyond constants and uniforms, but all of the invocations in a wavefront happen to take the same branch, even though the compiler has no way to know this.

Different hardware can handle different branching types without divergence.
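To make the three cases concrete, here is a minimal GLSL fragment shader sketch; the names (kUseFog, lightCount, vNormal) are made up for illustration:

#version 330 core

const bool kUseFog = false; // compile-time static: known from the source alone
uniform int lightCount;     // statically uniform: built from a uniform, value known only at run time
in vec3 vNormal;            // per-invocation input: conditions on it can diverge
out vec4 fragColor;

void main()
{
    vec3 color = vec3(0.0);
    if (kUseFog)              // compile-time static: the compiler culls this branch entirely
        color += vec3(0.5);
    if (lightCount > 0)       // statically uniform: can never break a wavefront
        color += float(lightCount) * vec3(0.1);
    if (vNormal.z > 0.0)      // dynamically uniform only if all invocations in the wavefront agree
        color.b += 1.0;
    fragColor = vec4(color, 1.0);
}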
Also, even if a condition is taken differently by invocations within a wavefront, the compiler can sometimes restructure the code so that no actual branching is required. You gave a fine example: output = input*enable + input2*(1-enable); is functionally equivalent to the if statement. A compiler could detect that an if is being used to set a variable, and thus execute both sides. This is frequently done for cases of dynamic conditions where the bodies of the branches are small.
Pretty much all hardware can handle var = bool ? val1 : val2 without having to diverge. This was possible way back in 2002.
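As a sketch of such branch-free selection, assuming enable holds 0.0 or 1.0, these three GLSL forms are equivalent (selectInput is a hypothetical helper; mix() is the standard linear-blend built-in):

vec3 selectInput(vec3 input1, vec3 input2, float enable)
{
    vec3 a = input1 * enable + input2 * (1.0 - enable); // arithmetic blend
    vec3 b = (enable == 1.0) ? input1 : input2;         // ternary select, no divergence needed
    vec3 c = mix(input2, input1, enable);               // mix(x, y, a) = x*(1.0-a) + y*a
    return c;                                           // a, b and c hold the same value here
}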
Since this is very hardware-dependent, it... depends on the hardware. There are however certain epochs of hardware that can be looked at:
Desktop, Pre-D3D10

There, it's kinda the wild west. NVIDIA's compiler for such hardware was notorious for detecting such conditions and actually recompiling your shader whenever you changed uniforms that affected such conditions.
In general, this era is where about 80% of the "never use if statements" advice comes from. But even here, it's not necessarily true.
You can expect optimization of static branching. You can hope that statically uniform branching won't cause any additional slowdown (though the fact that NVIDIA thought recompilation would be faster than executing it makes that unlikely, at least for their hardware). But dynamic branching is going to cost you something, even if all of the invocations take the same branch.
Desktop, Post-D3D10

Compilers of this era do their best to optimize shaders so that simple conditions can be executed simply. For example, your output = input*enable + input2*(1-enable); is something that a decent compiler could generate from your equivalent if statement.
Hardware of this era is generally capable of handling statically uniform branches with little slowdown. For dynamic branching, you may or may not encounter slowdown.
Desktop, D3D11 and later

Hardware of this era is pretty much guaranteed to be able to handle dynamically uniform conditions with few performance issues. Indeed, it doesn't even have to be dynamically uniform; so long as all of the invocations within the same wavefront take the same path, you won't see any significant performance loss.
Note that some hardware from the previous epoch probably could do this as well. But this is the one where it's almost certain to be true.
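For instance, in this hedged sketch every invocation reads the same texel, so the whole draw takes the same branch even though the compiler cannot prove it (settingsTex is a hypothetical 1x1 settings texture):

uniform sampler2D settingsTex;

vec3 pickLighting(vec3 base, vec3 lit)
{
    float mode = texelFetch(settingsTex, ivec2(0, 0), 0).r; // same value for every invocation
    if (mode > 0.5) // dynamically uniform: not a uniform, but never diverges in practice
        return lit;
    return base;
}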
Mobile, ES 2.0

Welcome back to the wild west. Though unlike pre-D3D10 desktop, this is mainly due to the huge variance of ES 2.0-caliber hardware. There's such a huge amount of hardware that can handle ES 2.0, and it all works very differently from itself.
Static branching will likely be optimized. But whether you get good performance from statically uniform branching is very hardware-dependent.
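On such hardware, the only branch you can fully rely on is the one the preprocessor removes. A hypothetical ES 2.0 fragment shader compiled in two variants:

precision mediump float;

#ifdef USE_VERTEX_COLOR
varying vec4 vColor;
#endif

void main()
{
#ifdef USE_VERTEX_COLOR
    gl_FragColor = vColor;    // only present in the variant compiled with the define
#else
    gl_FragColor = vec4(1.0); // the other variant contains no branch at all
#endif
}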
Mobile, ES 3.0+

Hardware here is rather more mature and capable than ES 2.0-class hardware. As such, you can expect statically uniform branches to execute reasonably well. And some hardware can probably handle dynamic branches the way modern desktop hardware does.