Point Sprites for particle system

Javier Ramírez · Jul 1, 2013 · Viewed 19.8k times

Are point sprites the best choice to build a particle system?

Are point sprites present in the newer versions of OpenGL and in the drivers of the latest graphics cards? Or should I do it using VBOs and GLSL?

Answer

Christian Rau · Jul 1, 2013

Point sprites are indeed well suited for particle systems. But they don't have anything to do with VBOs and GLSL; they are a completely orthogonal feature. Whether you use point sprites or not, you always have to use VBOs to upload the geometry, be it just points, pre-made sprites or whatever, and you always have to put this geometry through a set of shaders (in modern OpenGL, of course).

That being said, point sprites are very well supported in modern OpenGL, just not as automatically as with the old fixed-function approach. What is not supported is the point attenuation feature that scales a point's size based on its distance to the camera; you have to do this manually inside the vertex shader. Likewise, you have to texture the point manually in an appropriate fragment shader, using the special input variable gl_PointCoord (which tells you where inside the point's [0,1]-square the current fragment is). For example, a basic point sprite pipeline could look like this:

...
glPointSize(whatever);              //specify size of points in pixels
glDrawArrays(GL_POINTS, 0, count);  //draw the points
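
The "..." above stands for the usual resource setup. A minimal sketch of what it might contain (the names pointVBO, positions, count and program are illustrative placeholders, not from the original answer; a core profile also needs a VAO bound):

GLuint vao, pointVBO;
glGenVertexArrays(1, &vao);                 //core profile requires a bound VAO
glBindVertexArray(vao);

glGenBuffers(1, &pointVBO);
glBindBuffer(GL_ARRAY_BUFFER, pointVBO);
glBufferData(GL_ARRAY_BUFFER, count * 3 * sizeof(float), positions, GL_STREAM_DRAW);

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);  //matches layout(location = 0)
glEnableVertexAttribArray(0);

glUseProgram(program);                      //program linked from the shaders below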

vertex shader:

#version 330 core

uniform mat4 mvp;    //combined modelview-projection matrix

layout(location = 0) in vec4 position;

void main()
{
    gl_Position = mvp * position;
}

fragment shader:

#version 330 core

uniform sampler2D tex;    //the particle texture

layout(location = 0) out vec4 color;

void main()
{
    color = texture(tex, gl_PointCoord);
}

And that's all. Of course those shaders just do the most basic drawing of textured sprites, but they are a starting point for further features. For example, to compute the sprite's size based on its distance to the camera (maybe in order to give it a fixed world-space size), you have to glEnable(GL_PROGRAM_POINT_SIZE) and write to the special output variable gl_PointSize in the vertex shader:

#version 330 core

uniform mat4 modelview;
uniform mat4 projection;
uniform vec2 screenSize;    //viewport size in pixels
uniform float spriteSize;   //desired particle size in world units

layout(location = 0) in vec4 position;

void main()
{
    vec4 eyePos = modelview * position;
    //project a spriteSize-sized extent at the particle's eye-space depth...
    vec4 projVoxel = projection * vec4(spriteSize, spriteSize, eyePos.z, eyePos.w);
    vec2 projSize = screenSize * projVoxel.xy / projVoxel.w;
    //...and average its x/y footprint; the extra 0.5 maps the [-1,1] NDC range to pixels
    gl_PointSize = 0.25 * (projSize.x + projSize.y);
    gl_Position = projection * eyePos;
}

This would make all point sprites have the same world-space size (and thus a screen-space size in pixels that varies with their distance to the camera).
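
On the application side this variant only needs the point-size output enabled and the new uniforms filled in; a minimal sketch (program, viewportWidth and viewportHeight are hypothetical placeholders):

glEnable(GL_PROGRAM_POINT_SIZE);   //let the vertex shader set gl_PointSize
glUseProgram(program);
glUniform2f(glGetUniformLocation(program, "screenSize"), viewportWidth, viewportHeight);
glUniform1f(glGetUniformLocation(program, "spriteSize"), 0.1f);  //world-space size, pick to taste
glDrawArrays(GL_POINTS, 0, count);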


But point sprites, while still being perfectly supported in modern OpenGL, have their disadvantages. One of the biggest is their clipping behaviour. Points are clipped at their center coordinate (because clipping happens before rasterization and thus before the point gets "enlarged"). So if the center of the point is outside of the screen, the rest of it that might still reach into the viewing area is not shown; in the worst case the point suddenly disappears once it is half-way out of the screen. This is however only noticeable (or annoying) if the point sprites are too large. If they are very small particles that don't cover much more than a few pixels each anyway, this won't be much of a problem and I would still regard particle systems as the canonical use case for point sprites; just don't use them for large billboards.

But if this is a problem, modern OpenGL offers many other ways to implement point sprites, apart from the naive way of pre-building all the sprites as individual quads on the CPU. You can still render them as just a buffer full of points (and thus in the form they are likely to come out of your GPU-based particle engine). To actually generate the quad geometry, you can then use the geometry shader, which lets you generate a quad from a single point. First you do only the modelview transformation inside the vertex shader:

#version 330 core

uniform mat4 modelview;

layout(location = 0) in vec4 position;

void main()
{
    gl_Position = modelview * position;
}

Then the geometry shader does the rest of the work. It combines the point position with the 4 corners of a generic [0,1]-quad and completes the transformation into clip-space:

#version 330 core

const vec2 corners[4] = vec2[](
    vec2(0.0, 1.0), vec2(0.0, 0.0), vec2(1.0, 1.0), vec2(1.0, 0.0));

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4 projection;
uniform float spriteSize;

out vec2 texCoord;

void main()
{
    for(int i=0; i<4; ++i)
    {
        vec4 eyePos = gl_in[0].gl_Position;           //start with point position
        eyePos.xy += spriteSize * (corners[i] - vec2(0.5)); //add corner position
        gl_Position = projection * eyePos;             //complete transformation
        texCoord = corners[i];                         //use corner as texCoord
        EmitVertex();
    }
    EndPrimitive();    //finish the triangle strip for this particle
}

In the fragment shader you would then of course use the custom texCoord varying instead of gl_PointCoord for texturing, since we're no longer drawing actual points.
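
A matching fragment shader isn't spelled out above, but it is just the earlier one with texCoord in place of gl_PointCoord; a minimal sketch:

#version 330 core

uniform sampler2D tex;

in vec2 texCoord;    //written by the geometry shader

layout(location = 0) out vec4 color;

void main()
{
    color = texture(tex, texCoord);
}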


Or another possibility (and maybe faster, since I remember geometry shaders having a reputation for being slow) would be to use instanced rendering. This way you have an additional VBO containing the vertices of just a single generic 2D quad (i.e. the [0,1]-square) and your good old VBO containing just the point positions. What you then do is draw this single quad multiple times (instanced), while sourcing the individual instances' positions from the point VBO:

glVertexAttribPointer(0, ...points...);
glVertexAttribPointer(1, ...quad...);
glVertexAttribDivisor(0, 1);            //advance only once per instance
...
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, count);  //draw #count quads
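
Filled in, the attribute setup might look roughly like this (the buffer names pointVBO and quadVBO are illustrative placeholders; the quad holds the four corners of the [0,1]-square in triangle-strip order):

//per-instance particle positions, advanced once per instance
glBindBuffer(GL_ARRAY_BUFFER, pointVBO);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glVertexAttribDivisor(0, 1);

//the single generic quad, re-used by every instance
static const float quad[] = { 0.0f,1.0f,  0.0f,0.0f,  1.0f,1.0f,  1.0f,0.0f };
glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);

glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, count);  //draw #count quads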

And in the vertex shader you then assemble the per-point position with the actual corner/quad-position (which is also the texture coordinate of that vertex):

#version 330 core

uniform mat4 modelview;
uniform mat4 projection;
uniform float spriteSize;

layout(location = 0) in vec4 position;
layout(location = 1) in vec2 corner;

out vec2 texCoord;

void main()
{
    vec4 eyePos = modelview * position;            //transform to eye-space
    eyePos.xy += spriteSize * (corner - vec2(0.5)); //add corner position
    gl_Position = projection * eyePos;             //complete transformation
    texCoord = corner;
}

This achieves the same as the geometry shader based approach: properly clipped point sprites with a consistent world-space size. If you actually want to mimic the screen-space pixel size of actual point sprites, you need to put some more computational effort into it. But this is left as an exercise, and it would be quite the opposite of the world-to-screen transformation in the point sprite shader above.