How to transform 2D world to screen coordinates in OpenGL

bgroenks · Apr 21, 2013

I'm currently working on implementing an OpenGL powered renderer into a 2D game engine.

Because the OpenGL screen coordinate space is [-1,1], I'm a little confused as to how it should be interfaced with a generic, Cartesian 2D world coordinate system.

Let's say the viewport in my world is [-500,-500] to [1200, 1200], where [0, 0] is the world's origin. Do I only need to translate and scale down to coordinates between -1 and 1? Or is there some other form of transformation that needs to be performed?

How do you calculate where to draw objects on screen that have defined positions in your own coordinate system?

I would appreciate an explanation with and without glOrtho (so we can use the Z axis as well for perspective effects).

Answer

Andreas Haferburg · Jun 9, 2013

First, OpenGL uses multiple coordinate systems, so there is no "the OpenGL coordinate system". What you're referring to are normalized device coordinates (NDCs), where all three coordinates are in the range [-1, 1]. The different coordinate systems and their names are explained here, in the section "9.011 How are coordinates transformed? What are the different coordinate spaces?". 1)

Second, to avoid confusion, in OpenGL the term "viewport" refers to the part of the window that you're rendering to, and it's in window coordinates. In your question you used it to describe the portion (l, b, r, t) = (-500, -500, 1200, 1200) of your world that you want to render, which is in world coordinates.

You asked how to "calculate where to draw objects on screen". What you need to do is define a transformation (a 4x4 matrix) that maps from one coordinate system into another. Your 2D world is given in world coordinates, so you need to define a matrix that transforms world coordinates into NDCs, i.e. a projection matrix. In your shaders you then simply multiply your vertices with this projection matrix, and you get NDCs. glm::ortho/glOrtho computes such a projection matrix. As for the perspective projection, it's not clear what you want to do, but you should experiment with the perspective and lookAt functions in glm.
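A minimal sketch of what that could look like with glm, using the world rectangle from your question (the variable name and the near/far values are just illustrative; an ortho matrix is internally nothing more than a scale plus a translation):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Maps the world-space rectangle (-500,-500)..(1200,1200) onto NDC [-1,1]x[-1,1].
    // The near/far values -1 and 1 just bracket z = 0 for a 2D scene.
    glm::mat4 projection = glm::ortho(-500.0f, 1200.0f,   // left, right
                                      -500.0f, 1200.0f,   // bottom, top
                                        -1.0f,    1.0f);  // near, far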

To be clear, you define vertices in whatever coordinate system you want (which is called the world coordinate system), and simply draw these vertices. Your vertex shader's job is to apply the transformation you defined.
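For example, a bare-bones vertex shader could look like the sketch below (the GLSL is kept in a C++ raw string; aPosition, uProjection and program are placeholder names, and program is assumed to be an already compiled and linked shader program):

    #include <glm/gtc/type_ptr.hpp>  // glm::value_ptr

    const char* vertexShaderSrc = R"(
        #version 330 core
        in vec2 aPosition;          // vertex position in world coordinates
        uniform mat4 uProjection;   // world -> NDC matrix (e.g. from glm::ortho)
        void main() {
            gl_Position = uProjection * vec4(aPosition, 0.0, 1.0);
        }
    )";

    // Upload the matrix whenever it changes (or once per frame):
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "uProjection"),
                       1, GL_FALSE, glm::value_ptr(projection));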

Also note that you specified a square, and typically that's not what you want. Monitors and most windows are not square, so if you map that square onto a typical viewport, you would get a distorted view of your world. You need to factor in the aspect ratio (width:height) of the viewport. I've tried to explain that here.
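One way to handle that, as a rough sketch: keep the visible world height fixed and derive the width from the window's aspect ratio, so one world unit covers the same number of pixels in x and y. Here viewportWidth/viewportHeight are assumed to come from your windowing code, and the numbers are taken from your example:

    // -500..1200 spans 1700 world units, centered at (350, 350).
    float aspect      = static_cast<float>(viewportWidth) / viewportHeight;
    float worldHeight = 1700.0f;
    float worldWidth  = worldHeight * aspect;     // widen/narrow to match the window
    glm::vec2 center(350.0f, 350.0f);

    glm::mat4 projection = glm::ortho(center.x - worldWidth  * 0.5f,
                                      center.x + worldWidth  * 0.5f,
                                      center.y - worldHeight * 0.5f,
                                      center.y + worldHeight * 0.5f,
                                      -1.0f, 1.0f);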


1) As a side note, the FAQ is quite old and refers to ancient versions of OpenGL. Nowadays, programmers are expected and encouraged to manage both the model-view and the projection matrices themselves, since you need them in your shaders. I highly recommend glm; it's header-only and thus very easy to integrate, and it has nice syntax that mirrors GLSL.