# Thread: Coordinate system

1. Coordinate system

G'day,

Anyway, I've been messing around with OpenGL in 2D lately (instead of studying for exams, but meh...). I've done a fair bit so far, including variable-width fonts from raw bitmaps.

My problem is, I'm using 2D user-space coordinates for everything, but some things must be per-pixel, while I use a 0.0 -> 1.0 range for everything else. I constantly have to modify the orthographic projection, 10+ times per frame. What's the best way to avoid this, but still have a generic, resolution-independent coordinate system?

Would it be wise to "abstract" the drawing routines, all of which take user-space coordinates of 0.0 -> 1.0, and convert them to per-pixel coordinates when required? I suppose that'd be less expensive than the alternative?

0.0 -> 1.0
Code:
```(0, 0) ---------------> (1.0, 0.0)
|                     |
|                     |
|                     |
|                     |
|                     |
|                     |
|                     |
|                     |
|                     |
/---------------------> (1.0, 1.0)
(0.0, 1.0)```
Resolution, say 800x600 = per-pixel
Code:
```(0, 0) ---------------> (800, 0)
|                     |
|                     |
|                     |
|                     |
|                     |
|                     |
|                     |
|                     |
|                     |
/---------------------> (800, 600)
(0, 600)```
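The conversion I'm describing would boil down to one multiply per axis at draw time; here's a rough sketch of the idea (`vec2` and `norm_to_pixel` are illustrative names, not my real code):

```c
/* Illustrative only: map a normalized 0.0 -> 1.0 coordinate pair into
 * per-pixel coordinates for the current resolution. */
typedef struct { float x, y; } vec2;

vec2 norm_to_pixel(vec2 n, float screen_w, float screen_h)
{
    vec2 p = { n.x * screen_w, n.y * screen_h };
    return p;
}
```

So (0.5, 0.5) maps to (400, 300) at 800x600 and to (512, 384) at 1024x768, without ever touching the projection matrix.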
Thanks! :-)

2. I would not recommend normalizing every single coordinate in your system. Instead, since you are in screen space, just remember to translate to the origin first, then perform rotations and anything else, and then translate back to where you want your object.

This still involves some matrix multiplies, but it does not require a sqrtf() every time you want to move an object.

It is perfectly OK to work in screen space as long as you follow the paradigm I just laid out. I see no benefit in always working with normalized coordinates in screen space.

3. Thanks! But how do I ensure that a quad, for example, is the same size in 800x600 as it is in 1024x768? That's the bit I don't understand.

For example;
Code:
```#include <GL/glfw.h>

#include <stdio.h>

int main(void)
{
    int running = GL_TRUE;
    int width = 800, height = 600;

    glfwInit();
    glfwOpenWindow(width, height, /* width, height */
                   8, 8, 8,      /* red, green, blue bits = 8 + 8 + 8 = 24 bits */
                   8,            /* alpha buffer bits */
                   0,            /* depth buffer bits */
                   0,            /* stencil buffer bits */
                   GLFW_WINDOW
    );

    /* orthographic projection */
    glViewport(0, 0, width, height);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    gluOrtho2D(0.0f, width, height, 0.0f);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /* loop */
    while (running == GL_TRUE)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glLoadIdentity();

        /* I'd perform scaling etc. here? We are at the origin after all */

        /* I want the object in the middle, move to 400, 300 */
        glTranslatef(400, 300, 0);

        /* --- */
        glColor3ub(255, 255, 255);
        /* vertices: top left, bottom left, bottom right, top right
         * NOTE: this is in pixels and won't scale with the screen resolution */
        glBegin(GL_QUADS);
        glVertex2i(0, 0); /* top left */
        glVertex2i(0, 100);
        glVertex2i(100, 100);
        glVertex2i(100, 0);
        glEnd();

        /* --- */

        glfwSwapBuffers();
        running = glfwGetWindowParam(GLFW_OPENED);
    }

    glfwTerminate();
    return 0;
}```
I know you're a DX man, but it's all the same. I've seen other people work in a fixed resolution (say 1000x800) and "normalize", although that's the same idea I had, really.

From my research it seems a lot of people normalize... but I really don't want to do that.

4. I'm very far from an expert on this sort of thing, but it seems to me that you're wanting to represent coordinates internally as a number from 0.0 to 1.0, and have 1.0 always represent the size of the screen, no matter what that may be, right?

I'd probably just modify each coordinate before you pass it to OpenGL functions. Of course, that's probably what you're already doing.

I don't think it's too inefficient to do a floating-point multiplication every time you need a coordinate. After all, you're already storing coordinates as floating-point numbers.

Meh. I'm not much help, sorry.

5. Your initial idea of hierarchical draw routines is what I've used in the past to deal with such issues, i.e. have a Rectangle class which contains a draw() method, the first instruction of which is to push the current matrix/attribs onto the stacks, and then draw a rectangle in your "meta" coordinates.
Now whenever you need to tailor your rectangle to a certain screen dimension, first perform the desired scaling, then call Rectangle::draw, which will also scale as appropriate.
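In plain C terms the same design might look like this (the `rectangle` struct and `rectangle_corners()` are only a sketch of the idea; writing the pixel-space corners into an array stands in for issuing the actual glVertex calls):

```c
/* Sketch: a rectangle stored in "meta" (0.0 -> 1.0) coordinates whose
 * draw routine applies the current screen scale itself, so the caller
 * never touches the projection matrix. */
typedef struct { float x, y, w, h; } rectangle; /* meta coordinates */

void rectangle_corners(const rectangle *r,
                       float screen_w, float screen_h,
                       float out[4][2])
{
    float px = r->x * screen_w, py = r->y * screen_h;
    float pw = r->w * screen_w, ph = r->h * screen_h;
    out[0][0] = px;      out[0][1] = py;      /* top left     */
    out[1][0] = px;      out[1][1] = py + ph; /* bottom left  */
    out[2][0] = px + pw; out[2][1] = py + ph; /* bottom right */
    out[3][0] = px + pw; out[3][1] = py;      /* top right    */
}
```

A centred half-screen rectangle { 0.25, 0.25, 0.5, 0.5 } then produces the right pixel corners at any resolution, with the scaling done once inside the draw routine.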

6. > I'm very far from an expert on this sort of thing
Nor am I (yet :-)). But thanks for reiterating my idea

Thanks for the idea @nthony, I'll certainly do that. I've been doing a lot of research in this area, and there seem to be two ideas, or variations of them:

1. Normalize every coordinate, either to 1.0 or as a percentage, i.e. to 100.0
2. Work in a "design" pixel space, and translate to the actual screen coordinates at runtime

And that's prompted me to abstract all my rendering, because "If you find yourself having to call OpenGL every time you want to draw something, then you're doing it wrong". I still don't know what I'm going to do; each method seems to have a downside, and I'll have to choose the one that best suits my task, I guess.

Thanks again!

7. > 2. Work in a "design" pixel space, and translate to the actual screen coordinates at runtime

That sounds like the best option to me, since this is what happens in 3D as well. In DirectX I avoid DrawPrimitive() calls by batching items; however, during transformation I still have to lock a portion of the buffer and transform the vertices.

My suggestion to you is to work in screen space and then run everything through a final matrix that will orient it correctly. This requires the least amount of work and does not explicitly force every object to normalize.

Another option is simply to place everything in local space around the origin, then translate the whole system to (width/2, height/2) and render.
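A sketch of that final-matrix idea, reduced to the per-axis factors you'd feed into a single glScalef or matrix (`design_to_screen_scale()` is a made-up name, and 1000x800 is just the design size mentioned earlier in the thread):

```c
/* Author everything in a fixed "design" resolution, then compute one
 * scale per axis at render time; these two factors go into a single
 * final matrix that orients the whole scene for the real screen. */
float design_to_screen_scale(float screen, float design)
{
    return screen / design;
}
```

A quad authored 100 units wide at a 1000x800 design size then comes out 80 pixels wide at 800x600, with no per-object normalization.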

8. Well Bubba, I went with your original idea*; it seems to be working well, and it seems to be the most common way for OpenGL 2D applications.

GPUs are getting very fast these days, so I'm not too worried.

* Because it gave me the concept of a camera. Research showed that you should not use the projection matrix for camera transforms; hundreds of people do this -- and it's very, very wrong!

9. Awesome! It worked fantastically. Thanks a lot everyone, I actually used all the ideas combined! Granted, this means a lot of matrix work, but that shouldn't make a difference.

I integrated it with my fonts, which means I have resolution-independent rendering, fonts and scaling :-).
Take a look;

The code is available via svn at http://code.google.com/p/harw

10. The projection matrix in Direct3D is usually set once at init and is never messed with again unless a scene requires it to be altered. Normally projection is a set and forget item in my projects.

However for some shaders you must pass in the world view projection matrix and it is more efficient to multiply the matrices on the video card than in software. Computing a projection matrix is actually not costly at all but you really should only have to compute it once.

> Originally Posted by Bubba
> The projection matrix in Direct3D is usually set once at init and is never messed with again unless a scene requires it to be altered. Normally projection is a set and forget item in my projects.
>
> However for some shaders you must pass in the world view projection matrix and it is more efficient to multiply the matrices on the video card than in software. Computing a projection matrix is actually not costly at all but you really should only have to compute it once.
Do you reset the projection matrix when your scene is resized, i.e. when the resolution changes? Currently I do, but that makes things a bit funny (like fonts), since the resolution they were generated at no longer applies, and I scale to keep them the same size. Which basically means a resolution resize gives a different result to a resolution resize plus a restart.

Now I can see why you set it once :-). Thanks again Bubba.

I really enjoy this stuff, as little of it as I understand... but I may try to enter this area as a career after uni; I understand it's very hard! Either that, or I'll be writing banking software for the next 50 years.

12. I could see altering the projection matrix for some cinematic field-of-view effects. It could also change mid-level during a loading checkpoint. If the game is using an ortho matrix for the HUD, then the projection matrix is definitely being changed for that. Effects like the HUDs in fighter sims and space sims are also probably using orthos inside a viewport.

I'm not saying you can never change it, but at least in Direct3D the SDK says that changing the projection matrix alters a lot of internal variables in D3D. Because of this, it is not recommended to change the projection matrix frequently.

In most of my current projects I only set the projection matrix once and leave it alone until program exit.

13. I scale the HUD on the modelview matrix; is that bad practice?

ie;
Code:
```void flipd_render_camera_mode(float x, float y)
{
    float scaleX = flipd_render_scale_factor((float) renderState.renderWidth, renderState.cameraViewX),
          scaleY = flipd_render_scale_factor((float) renderState.renderHeight, renderState.cameraViewY);

    /* set the current viewport */
    renderState.viewportWidth = renderState.cameraViewX;
    renderState.viewportHeight = renderState.cameraViewY;

    /* reset the modelview matrix */
    glLoadIdentity();
    glScalef(scaleX, scaleY, 1.0f); /* scale everything so it fits in our "view distance" */
    glTranslatef(x, y, 0.0f);
}

/* render over the top, with 0 -> width and 0 -> height
 * TODO: Use the projection matrix rather than the modelview matrix?
 * It works the same way either way... but we'd have to push the projection matrix stack,
 * then pop it off after we'd finished "hudding" */
void flipd_render_overlay_mode(const float width, const float height)
{
    float scaleX = flipd_render_scale_factor((float) renderState.renderWidth, width),
          scaleY = flipd_render_scale_factor((float) renderState.renderHeight, height);

    /* current viewport */
    renderState.viewportWidth = width;
    renderState.viewportHeight = height;

    glLoadIdentity();
    glScalef(scaleX, scaleY, 1.0f);
}```
Which means I don't have to push/pop/push the projection matrix. But it wouldn't work for 3D...

Or, is this the way they intend it?
Code:
```void flipd_render_overlay_mode(const float width, const float height)
{
    /* current viewport */
    renderState.viewportWidth = width;
    renderState.viewportHeight = height;

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();

    gluOrtho2D(0.0f, width, height, 0.0f);

    /* use the modelview matrix for all our stuff */
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
}

void flipd_render_overlay_mode_end(void)
{
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();

    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}```
This means camera transformations will apply to anything drawn after flipd_render_overlay_mode_end(), which is cool. Hmm, apparently this is the way to go! If it isn't, just let me know.
