Thread: Lightmapping on the GPU

  1. #1
    The Right Honourable psychopath
    Join Date: Mar 2004
    Location: Where circles begin.
    Posts: 1,071

    Lightmapping on the GPU

    I'm planning on getting some lightmapping working in my engine. I had originally planned on having the engine use dynamic lights with shaders exclusively, but that's just not nearly as fast as lightmaps. And practically speaking, most of the lights in a game world are static anyway.

    Rather than calculate the lightmaps on the CPU, though, I want to do it on the GPU via shaders, and "bake" the output of my lighting shaders into the lightmap.

    To do that, I know I need to write the vertex and texture coordinates of the object I'm lightmapping into a texture and upload that to the GPU. In the shader, I can use the data in those textures to calculate the lighting and shadows. To get the result back out, I know I'm supposed to draw into an off-screen render target, which I'll use FBOs for.
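    Something like this is what I have in mind for the render target (just a rough sketch, assuming the GL_EXT_framebuffer_object extension with its entry points already loaded; the names and lightmap size are made up, and error checking is omitted):

    Code:
        #include <GL/gl.h>
        #include <GL/glext.h>  /* EXT framebuffer entry points, assumed loaded */

        #define LM_SIZE 256    /* lightmap resolution -- arbitrary choice */

        GLuint lightmapTex, lightmapFBO;  /* made-up names */

        void createLightmapTarget(void)
        {
            /* colour texture the lighting gets baked into */
            glGenTextures(1, &lightmapTex);
            glBindTexture(GL_TEXTURE_2D, lightmapTex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, LM_SIZE, LM_SIZE, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, NULL);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            /* FBO with that texture as its colour attachment */
            glGenFramebuffersEXT(1, &lightmapFBO);
            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, lightmapFBO);
            glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                      GL_TEXTURE_2D, lightmapTex, 0);
            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
        }

        /* baking pass: bind the FBO, set the viewport to LM_SIZE x LM_SIZE,
           draw with the lighting shader, then unbind -- the result stays on
           the GPU in lightmapTex, ready to bind like any other texture */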

    What I can't figure out (or visualize, I guess) is how rendering into the FBO translates into a reusable lightmap.

    I mean, to render the lightmap, I render into a screen-sized quad, where the size of the "screen" is the size of the lightmap (my offscreen render target). But if the input for every pixel of that texture is the world-space position of one of the object's vertices, and the light is calculated per-pixel, then I'm really getting per-vertex lighting, aren't I? How do I do it per-pixel? Or is that even possible with this method?

    (Hope that made sense. I'm confusing myself a little here)
    M.Eng Computer Engineering Candidate
    B.Sc Computer Science

    Robotics and graphics enthusiast.

  2. #2
    Registered User VirtualAce
    Join Date: Aug 2001
    Posts: 9,607
    That is not what I would term lightmapping per se. What you are describing is a post-processing effect via shaders. If you try to use that for lightmapping, you will run into some very big problems.

    Let's say we are looking into a corner with a lamp in it. If you render a screen-sized quad to illuminate the scene, your quad will not line up with the walls, since they are at an angle to the camera. You could use this for a post-process bloom or glare effect from the light, but for actual lightmapping you would have to generate a lightmap texture and apply it to each face or object affected by the light. AFAIK lightmapping does not require a screen-aligned quad; it is more a trick of multi-texturing to achieve illumination. The hard part is generating the texture to apply to your surface to make it appear lit.
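    In fixed-function terms the multi-texturing part boils down to something like this (just a sketch; the texture names and UV variables are placeholders):

    Code:
        /* unit 0 carries the surface texture, unit 1 modulates it
           by the pre-generated lightmap (baseTex/lightmapTex and the
           UV variables are made-up placeholders) */
        glActiveTexture(GL_TEXTURE0);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, baseTex);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, lightmapTex);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

        /* each vertex needs two sets of texture coordinates */
        glMultiTexCoord2f(GL_TEXTURE0, u0, v0);  /* base texture UVs */
        glMultiTexCoord2f(GL_TEXTURE1, u1, v1);  /* lightmap UVs     */
        glVertex3f(x, y, z);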

    I did not explain it very well, so here is an OpenGL article on how to do it:
    http://www.3ddrome.com/articles/dynamiclightmaps.php

  3. #3
    The Right Honourable psychopath
    Join Date: Mar 2004
    Location: Where circles begin.
    Posts: 1,071
    Well, what I was talking about wasn't exactly post-processing either (although it could be extended to that). But I thought about it more and did some more research, and the method I described would indeed give me per-vertex lighting. Basically, it would take in a position map with each pixel containing the world-space position of one of the object's vertices, and then output a map with a light value for each vertex, where each pixel of the output lightmap corresponds to a vertex of the input position map.
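    In shader terms, that baking pass amounts to something like this (GLSL as a C string; the names are made up, the simple attenuation is just a stand-in for whatever the real lighting shader does, and it assumes the full-texture quad is drawn with matching texcoords):

    Code:
        /* each texel of positionMap holds one vertex's world-space
           position, so each output texel is that vertex's light value --
           which is exactly why this comes out as per-vertex lighting */
        const char *bakeFrag =
            "uniform sampler2D positionMap;                               \n"
            "uniform vec3 lightPos;                                       \n"
            "uniform vec3 lightColor;                                     \n"
            "void main() {                                                \n"
            "    vec3 P = texture2D(positionMap, gl_TexCoord[0].st).xyz;  \n"
            "    float atten = 1.0 / (1.0 + 0.1 * distance(P, lightPos)); \n"
            "    gl_FragColor = vec4(lightColor * atten, 1.0);            \n"
            "}                                                            \n";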

    What I'm wondering now is if there's a way to take the method described in that article and move it to the GPU. Perhaps rendering each face of an object to the lightmap FBO and storing those?
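    Something like this, maybe: instead of drawing a quad, draw the mesh itself with its lightmap UVs as the output position, so the world-space position gets interpolated across each face and the lighting comes out per-texel (rough, untested sketch; names made up, and it assumes every face has unique, non-overlapping lightmap UVs):

    Code:
        /* vertex shader: rasterize the mesh in lightmap UV space;
           lightmap UVs in [0,1] get remapped to clip space [-1,1],
           and the world-space position is handed to the lighting
           fragment shader as a varying */
        const char *bakeVert =
            "varying vec3 worldPos;                              \n"
            "void main() {                                       \n"
            "    worldPos = gl_Vertex.xyz;                       \n"
            "    vec2 uv = gl_MultiTexCoord1.st;                 \n"
            "    gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);   \n"
            "}                                                   \n";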

    EDIT: never mind. I'm pretty sure that won't work either.
    Last edited by psychopath; 05-24-2007 at 07:27 AM.

  4. #4
    Registered User VirtualAce
    Join Date: Aug 2001
    Posts: 9,607
    I think most of those approaches would take as much time to do dynamically on the GPU as it would to just calculate the lighting values per-pixel, if not more.

    Lightmapping is best pre-computed, or you probably won't be saving any time.

  5. #5
    The Right Honourable psychopath
    Join Date: Mar 2004
    Location: Where circles begin.
    Posts: 1,071
    I don't mean to use the GPU to do it dynamically. It would still be static, but calculated on the GPU to (hopefully) be easier to program, since I can just reuse the lighting shaders I'm running dynamically now and store their output in a static lightmap. Then the only thing the shaders have to do at runtime is bumpmapping and stuff.
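    i.e. the runtime pass would boil down to something like this (sketch only; bumpmapping left out, names made up, and it assumes the vertex stage passes through both sets of UVs):

    Code:
        /* runtime fragment shader: just modulate the surface texture
           by the baked lightmap (baseTex/lightmap are placeholders) */
        const char *runtimeFrag =
            "uniform sampler2D baseTex;   /* surface texture */        \n"
            "uniform sampler2D lightmap;  /* baked lighting  */        \n"
            "void main() {                                             \n"
            "    vec4 albedo = texture2D(baseTex,  gl_TexCoord[0].st); \n"
            "    vec4 light  = texture2D(lightmap, gl_TexCoord[1].st); \n"
            "    gl_FragColor = albedo * light;                        \n"
            "}                                                         \n";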
