Text/font rendering in OpenGLES 2 (iOS - CoreText?) - options and best practice?

Adam · Sep 1, 2013 · Viewed 7.4k times

There are many questions on OpenGL font rendering; most are answered with texture atlases (fast, but wrong) or string-textures (fixed text only).

However, those approaches are poor and appear to be years out of date (what about using shaders to do this better/faster?). For OpenGL 4.1 there's this excellent question looking at "what should you use today?":

What is state-of-the-art for text rendering in OpenGL as of version 4.1?

So, what should we be using on iOS GL ES 2 today?

I'm disappointed that there appears to be no open-source (or even commercial) solution. I know a lot of teams suck it down and spend weeks of dev time re-inventing this wheel, gradually learning how to kern and space etc (ugh) - but there must be a better way than re-writing the whole of "fonts" from scratch?


As far as I can see, there are two parts to this:

  1. How do we render text using a font?
  2. How do we display the output?

For 1 (how to render), Apple provides MANY ways to get the "correct" rendered output - but the "easy" ones don't support OpenGL (maybe some of the others do - e.g. is there a simple way to map CoreText output to OpenGL?).

For 2 (how to display), we have shaders, we have VBOs, we have glyph-textures, we have lookup-textures, and other techniques (e.g. the OpenGL 4.1 stuff linked above?)

Here are the two common OpenGL approaches I know of:

  1. Texture atlas (render all glyphs once, then render 1 x textured quad per character, from the shared texture)
    1. This is wrong, unless you're using a 1980s era "bitmap font" (and even then: texture atlas requires more work than it may seem, if you need it correct for non-trivial fonts)
    2. (fonts aren't "a collection of glyphs": there's a vast amount of positioning, layout, wrapping, spacing, kerning, styling, colouring, weighting, etc. Texture atlases fail at all of this)
  2. Fixed string (use any Apple class to render correctly, then screenshot the backing image-data, and upload as a texture)
    1. In human terms, this is fast. In frame-rendering, this is very, very slow. If you do this with a lot of changing text, your frame rate goes through the floor
    2. Technically, it's mostly correct (not entirely: you lose some information this way) but hugely inefficient

I've also seen, but heard both good and bad things about:

  1. Imagination/PowerVR "Print3D" (link broken) (from the guys that manufacture the GPU! But their site has moved/removed the text rendering page)
  2. FreeType (requires pre-processing, interpretation, lots of code, extra libraries?)
  3. ...and/or FTGL http://sourceforge.net/projects/ftgl/ (rumors: slow? buggy? not updated in a long time?)
  4. Font-Stash http://digestingduck.blogspot.co.uk/2009/08/font-stash.html (high quality, but very slow?)

Within Apple's own OS / standard libraries, I know of several sources of text rendering. NB: I have used most of these in detail on 2D rendering projects; my statements about them producing different rendered output are based on direct experience.

  1. CoreGraphics with NSString
    1. Simplest of all: render "into a CGRect"
    2. Seems to be a slightly faster version of the "fixed string" approach people recommend (even though you'd expect it to be much the same)
  2. UILabel and UITextArea with plain text
    1. NB: they are NOT the same! Slight differences in how they render the same text
  3. NSAttributedString, rendered to one of the above
    1. Again: renders differently (the differences I know of are fairly subtle and classified as "bugs", various SO questions about this)
  4. CATextLayer
    1. A hybrid between iOS fonts and old C rendering. Uses the "not fully" toll-free-bridged CTFont / UIFont, which reveals some more rendering differences / strangeness
  5. CoreText
    1. ... the ultimate solution? But a beast of its own...

Answer

Adam · Sep 2, 2013

I did some more experimenting, and it seems that CoreText might make for a perfect solution when combined with a texture atlas and Valve's signed-distance textures (which can turn a bitmap glyph into a resolution-independent, high-resolution-looking texture).

...but I don't have it working yet, still experimenting.


UPDATE: Apple's docs say they give you access to everything except the final detail: which glyph + glyph layout to render (you can get the line layout, and the number of glyphs, but not the glyphs themselves, according to the docs). For no apparent reason, this core piece of info is apparently missing from CoreText (if so, that makes CT almost worthless; I'm still hunting to see if I can find a way to get the actual glyphs + per-glyph data)


UPDATE2: I now have this working properly with Apple's CT (but no distance-textures yet), but it ends up as 3 class files, 10 data structures, about 300 lines of code, plus the OpenGL code to render it. Too much for an SO answer :(.

The short answer is: yes, you can do it, and it works, if you:

  1. Create a CTFramesetter
  2. Create CTFrame for a theoretical 2D frame
  3. Create a CGContext that you'll convert to a GL texture
  4. Go through glyph-by-glyph, allowing Apple to render to the CGContext
  5. Each time Apple renders a glyph, calculate the bounding box (this is HARD), and save it somewhere
  6. And save the unique glyph-ID (this will be different for e.g. "o", "f", and "of" (one glyph!))
  7. Finally, send your CGContext up to GL as a texture

When you render, use the list of glyph-IDs that Apple created, and for each one use the saved info and the texture to render quads whose texture coordinates pull individual glyphs out of the texture you uploaded.

This works, it's fast, it works with all fonts, it gets all font layout and kerning correct, etc.
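The texture-coordinate step in that last paragraph is pure arithmetic; a minimal sketch, with struct and function names that are illustrative rather than from the answer's actual code:

```c
#include <assert.h>

/* Normalized texture coordinates for one glyph's quad, given its saved
   pixel rectangle inside the uploaded atlas texture. Depending on how
   the CGContext pixels were uploaded, you may also need to flip V. */
typedef struct { float u0, v0, u1, v1; } TexRect;

TexRect glyph_texrect(int px, int py, int pw, int ph,
                      int atlas_w, int atlas_h)
{
    TexRect r;
    r.u0 = (float)px        / (float)atlas_w;  /* left   */
    r.v0 = (float)py        / (float)atlas_h;  /* top    */
    r.u1 = (float)(px + pw) / (float)atlas_w;  /* right  */
    r.v1 = (float)(py + ph) / (float)atlas_h;  /* bottom */
    return r;
}
```

Each frame you then emit one quad per glyph-ID in the layout order Apple produced, positioned by the saved per-glyph data, with these UVs in the vertex attributes.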