I'm planning on getting some lightmapping working in my engine. I had originally planned on having the engine use dynamic lights with shaders exclusively, but that's just not nearly as fast as lightmaps. And practically speaking, most of the lights in a game world are static anyway.

Rather than calculate the lightmaps on the CPU, though, I want to do it on the GPU via shaders and "bake" the output of my lighting shaders into the lightmap.

To do that, I know I need to write the vertex positions and texture coordinates of the object I'm lightmapping into a texture and upload that to the GPU. In the shader, I can use the data in that texture to calculate the lighting and shadows. To get the result back to the CPU, I know I'm supposed to draw into an off-screen render target, which I'll use FBOs for.
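To make that more concrete, here's roughly the setup I'm picturing in plain OpenGL. The lightmap size, texture formats, and names are all just placeholders at this point, and I'm assuming a GL 3.x context and function loader are already set up:

```cpp
// Rough sketch, not real engine code: assumes an OpenGL 3.x context and a
// function loader (glad / GLEW) are already initialized.
#include <cassert>
#include <vector>

const int LIGHTMAP_SIZE = 512;  // placeholder lightmap resolution

GLuint positionTex, lightmapTex, fbo;

void createLightmapTargets(const std::vector<float>& worldPositions)
{
    // World-space position for every lightmap texel, filled on the CPU from
    // the unwrapped geometry and uploaded once as a float texture.
    glGenTextures(1, &positionTex);
    glBindTexture(GL_TEXTURE_2D, positionTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, LIGHTMAP_SIZE, LIGHTMAP_SIZE, 0,
                 GL_RGB, GL_FLOAT, worldPositions.data());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // The lightmap itself is the color attachment of an FBO, so the lighting
    // shader renders straight into it.
    glGenTextures(1, &lightmapTex);
    glBindTexture(GL_TEXTURE_2D, lightmapTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, LIGHTMAP_SIZE, LIGHTMAP_SIZE, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, lightmapTex, 0);
    assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);

    // Render the full-screen lighting pass into this FBO at lightmap size,
    // then either keep lightmapTex on the GPU or glReadPixels it back.
    glViewport(0, 0, LIGHTMAP_SIZE, LIGHTMAP_SIZE);
}
```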

What I can't figure out (or visualize, I guess) is how rendering into the FBO translates into a reusable lightmap.

I mean, to render the lightmap, I render a screen-sized quad, where the size of the screen is the size of the lightmap (my off-screen render target). But if the input for every pixel of that texture is the world-space position of the object, and the light is calculated per-pixel from that, aren't I really getting per-vertex lighting? How do I do it per-pixel? Or is that even possible with this method?
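To make the question concrete, here's roughly the fragment shader I imagine running over that full-screen quad. The normal texture, the single point light, and the falloff formula are all just stand-ins I made up; my real lighting shaders would do more, and the shadow test is missing entirely:

```cpp
// Fragment shader for the full-screen bake pass, embedded as a C++ string.
// One fragment of the quad corresponds to one texel of the lightmap.
const char* bakeFragmentSrc = R"glsl(
#version 330 core

uniform sampler2D u_worldPositions;  // world-space position per lightmap texel
uniform sampler2D u_worldNormals;    // world-space normal per texel (assumed)
uniform vec3 u_lightPos;
uniform vec3 u_lightColor;

in vec2 v_uv;      // lightmap UV, interpolated across the quad
out vec4 o_color;

void main()
{
    vec3 P = texture(u_worldPositions, v_uv).xyz;
    vec3 N = normalize(texture(u_worldNormals, v_uv).xyz);

    vec3  toLight = u_lightPos - P;
    float dist    = length(toLight);
    vec3  L       = toLight / dist;

    float diffuse = max(dot(N, L), 0.0);
    float atten   = 1.0 / (1.0 + dist * dist);  // made-up falloff

    // Shadow tests (shadow map lookups, ray casts, etc.) would go here too.
    o_color = vec4(u_lightColor * diffuse * atten, 1.0);
}
)glsl";
```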

(Hope that made sense. I'm confusing myself a little here)