I'm writing a library that deals with 2D graphical shapes.
I'm just wondering why my coordinate system should range over [-1, 1] for both the x and y axes
instead of [0, width] for x and [0, height] for y?
I went for the latter system because I felt it was more straightforward to implement.
From Jim Blinn's A Trip Down the Graphics Pipeline, p. 138:
Let's start with what might at first seem the simplest transformation: normalized device coordinates to pixel space. The transform is
s_x * X_NDC + d_x = X_pixel
s_y * Y_NDC + d_y = Y_pixel
A user/programmer does all screen design in NDC. There are three nasty realities of the hardware that NDC hides from us:
The actual number of pixels in x and y.
Non-uniform pixel spacing in x and y.
Up versus down for the Y coordinate. The NDC-to-pixel transformation will invert Y if necessary so that Y in NDC points up.
...
s_x = ( N_x - epsilon ) / 2
d_x = ( N_x - epsilon ) / 2
s_y = ( N_y - epsilon ) / (-2*a)
d_y = ( N_y - epsilon ) / 2
epsilon = 0.001
a = N_y/N_x (physical screen aspect ratio)
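To make the quoted formulas concrete, here is a minimal sketch (my own, not from the book) that plugs them into a single mapping function. The names ndc_to_pixel, n_x, and n_y are mine; note that the given s_y implies the NDC y coordinate spans [-a, a] rather than [-1, 1], which keeps the pixel spacing uniform in x and y.

    # Sketch of Blinn's NDC-to-pixel transform, assuming an n_x by n_y window.
    def ndc_to_pixel(x_ndc, y_ndc, n_x, n_y, epsilon=0.001):
        """Map NDC (x in [-1, 1], y in [-a, a], y up) to pixel coords (y down)."""
        a = n_y / n_x                     # physical screen aspect ratio
        s_x = (n_x - epsilon) / 2
        d_x = (n_x - epsilon) / 2
        s_y = (n_y - epsilon) / (-2 * a)  # negative: flips y so NDC +y points up
        d_y = (n_y - epsilon) / 2
        return (s_x * x_ndc + d_x, s_y * y_ndc + d_y)

    # (-1, a) lands at the top-left pixel, (1, -a) at the bottom-right:
    print(ndc_to_pixel(-1.0,  480/640, 640, 480))  # ~(0.0, 0.0)
    print(ndc_to_pixel( 1.0, -480/640, 640, 480))  # ~(639.999, 479.999)

The epsilon keeps the extreme NDC values from rounding up to pixel index n_x or n_y, which would fall outside the screen. This is the payoff of designing in NDC: the same shape coordinates render correctly at any resolution, with only s and d changing.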