Deferred Shading


  1. #1
    The Right Honourable psychopath's Avatar
    Join Date
    Mar 2004
    Location
    Where circles begin.
    Posts
    1,070

    Deferred Shading

I'm planning on switching my engine's rendering system from the traditional multi-pass system to a deferred shading system. I've been reading through various articles on the subject, and I understand the concept, but I'm having trouble figuring out how I would go about implementing it.

    I know I have to pack the vertex positions, normals, material data, and other info into a floating-point texture (the G-buffer). That texture is then read back in the fragment processor, where the scene is reconstructed from the data and lighting is applied to the image with a shader.
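    The packing step described above usually involves range-compressing unit normals into [0,1] so they survive a fixed-point render target, then expanding them again in the lighting shader. A minimal C++ sketch of that round trip (names are my own, not from any engine here):

    ```cpp
    #include <cassert>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Range-compress a unit normal from [-1,1] into [0,1] so it can be
    // stored in a plain RGB render target (n * 0.5 + 0.5).
    Vec3 encodeNormal(const Vec3& n) {
        return { n.x * 0.5f + 0.5f, n.y * 0.5f + 0.5f, n.z * 0.5f + 0.5f };
    }

    // Expand a stored color back into a normal (c * 2 - 1), as the
    // lighting shader would when it reads the G-buffer.
    Vec3 decodeNormal(const Vec3& c) {
        return { c.x * 2.0f - 1.0f, c.y * 2.0f - 1.0f, c.z * 2.0f - 1.0f };
    }
    ```

    With a true floating-point texture the compression isn't strictly needed, but it keeps the same G-buffer layout working on targets without float support.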

    Now, suppose I want to use normal mapping in my final lighting shader. I just have to pack the tangent vector into the G-buffer as well, since I can calculate the bitangent on the GPU during the post-process step. However, I need to get the normals out of the normal map. There can be many normal maps in a scene, since most objects are usually textured differently. Do I make a second rendering pass that renders the scene with normal maps only, and then copies the screen into a texture? Would I then use this texture with the G-buffer "texture" to perform the lighting on the GPU? My lighting shader also uses gloss maps. With the method mentioned above, I would need to make three rendering passes before actually doing any lighting. Is this still faster than num_objects*num_lights rendering?
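    The bitangent reconstruction mentioned above is just a cross product of the stored normal and tangent, which is why only two of the three basis vectors need to be kept in the G-buffer. A C++ sketch of the idea (the shader equivalent is a one-line `cross(N, T)`):

    ```cpp
    #include <cassert>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Rebuild the bitangent from the normal and tangent, so only N and T
    // have to be written into the G-buffer.
    Vec3 bitangent(const Vec3& n, const Vec3& t) {
        return cross(n, t);
    }
    ```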

    One of the key advantages of deferred shading is supposed to be that the scene need only be rendered once. To achieve this, would I just store scene info in VBOs (vertex buffer objects)?

    I'm a little confused, so hopefully some of you can help clear this up.

    Thanks.
    Memorial University of Newfoundland
    Computer Science

    Mac and OpenGL evangelist.

  2. #2
    Super Moderator VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,598
    I've looked at deferred shading, but to me it seems pointless. I'm not sure why you are using normal maps when you can just as easily pre-compute the normal and, in the vertex shader, transform it by the inverse-transpose of the world matrix and do your calculations from there. Some shaders don't need the transformed normal, so again I'm not seeing the use for a normal map - unless you want to add detail to a flat texture at the sub-pixel level.

  3. #3
    The Right Honourable psychopath's Avatar
    Join Date
    Mar 2004
    Location
    Where circles begin.
    Posts
    1,070
    The only real gain you get from deferred shading is when the scene requires many lights. Rendering the scene once for each of those lights would choke the graphics card. Rendering the scene once and then drawing a quad (two polygons) for each light is much more efficient. If the scene is very low-poly, the overhead of creating the G-buffer usually slows things down more than 'regular' rendering would.
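    That trade-off can be made concrete by counting draw calls. With classic multi-pass lighting each object is redrawn per light; with deferred shading the scene is drawn once into the G-buffer and each light adds only a screen-space quad. A rough model, ignoring the per-pixel cost of filling the G-buffer:

    ```cpp
    #include <cassert>

    // Forward multi-pass: every object is re-rendered for every light.
    int forwardDraws(int numObjects, int numLights) {
        return numObjects * numLights;
    }

    // Deferred: one geometry pass plus one full-screen (or light-volume)
    // quad per light.
    int deferredDraws(int numObjects, int numLights) {
        return numObjects + numLights;
    }
    ```

    For 100 objects and 8 lights that's 800 draws forward versus 108 deferred; with a single light, forward (100) actually beats deferred (101), which matches the point that simple scenes can lose to the G-buffer overhead.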

    Quote Originally Posted by Bubba
    unless you want to add detail to a flat texture at the sub-pixel level
    That's generally the point, I thought.

  4. #4
    vae victus! skorman00's Avatar
    Join Date
    Nov 2003
    Posts
    594
    Quote Originally Posted by psychopath
    With the method mentioned above, I would need to make three rendering passes before actually doing any lighting. Is this still faster than num_objects*num_lights rendering?
    I would think so, as long as num_lights > 3 =)

    Couldn't you also use multitexturing, and set up your shaders so that they know texture0 is the surface texture, texture1 is the normal map, and texture2 is the gloss map?
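    The fixed texture-unit convention skorman00 suggests can be captured in a small table on the engine side, so the C++ code and the shaders always agree on which unit holds which map. A sketch (the sampler names here mirror the shader later in this thread; the function is illustrative, not from any real engine):

    ```cpp
    #include <cassert>
    #include <string>

    // Fixed texture-unit assignments shared between the engine and the
    // shaders: unit 0 = base texture, unit 1 = normal map, unit 2 = gloss map.
    enum TextureUnit { BASE_MAP = 0, NORMAL_MAP = 1, GLOSS_MAP = 2 };

    // Look up the unit a sampler uniform should be bound to; the engine
    // would pass this value to the sampler uniform after linking.
    int unitForSampler(const std::string& name) {
        if (name == "baseMap")  return BASE_MAP;
        if (name == "bumpMap")  return NORMAL_MAP;
        if (name == "glossMap") return GLOSS_MAP;
        return -1; // unknown sampler
    }
    ```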

  5. #5
    The Right Honourable psychopath's Avatar
    Join Date
    Mar 2004
    Location
    Where circles begin.
    Posts
    1,070
    Quote Originally Posted by skorman00
    Couldn't you also use multitexturing, and set up your shaders so that they know texture0 is the surface texture, texture1 is the normal map, and texture2 is the gloss map?
    Yep, I believe so. I've been playing with this in RenderMonkey, and I've got a better idea how this works now.

    Originally, I failed to see that you couldn't just pile all the data into one texture and ship it off to the GPU.

    I've got it down to...
    Pass one:
    Draw model
    -Vertex shader passes vertex position to the fragment shader.
    -Fragment shader writes vertex.xyz into the RGB components of the render texture.

    Pass two:
    Draw model
    Apply normal map texture
    -Vertex shader passes vertex normal to the fragment shader
    -Fragment shader extracts normals from the normal map, applies filtering, and adds them to the vertex normal; the final normal is normalized and its xyz written to the RGB components of the RT.

    Pass three:
    Draw model
    Apply base texture
    -Vertex shader passes texture coords to fragment shader (a pretty standard operation)
    -Fragment shader applies texture to model (also pretty standard)

    The subsequent passes are similar for a gloss map (same steps as the normal map, except the gloss map can be packed into the RT's alpha channel). The tangent vector is also written to a texture, much like the position.

    The result is several screen-sized texture maps that are loaded by the lighting shader, where the information is extracted and the lighting calculations are done. Kind of like ray tracing, only completely different.
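    Stripped of the texture fetches, the lighting pass described here boils down to: for each screen pixel, read position and normal from the G-buffer textures and run an ordinary N·L computation. The per-pixel math might look like this, with C++ standing in for the fragment shader (names are illustrative):

    ```cpp
    #include <cassert>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    Vec3 normalize(const Vec3& v) {
        float len = std::sqrt(dot(v, v));
        return { v.x / len, v.y / len, v.z / len };
    }

    // Per-pixel deferred diffuse: position and normal come from the
    // G-buffer textures instead of interpolated vertex attributes.
    float diffuse(const Vec3& gbufPos, const Vec3& gbufNormal, const Vec3& lightPos) {
        Vec3 L = normalize(sub(lightPos, gbufPos));
        float d = dot(gbufNormal, L);
        return d > 0.0f ? d : 0.0f; // max(N.L, 0)
    }
    ```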

    After the lighting is done, it can be written to another RT, for blurring and fancy HDR effects.

    It sounds like a lot of passes, but on modern hardware (Radeon 9500 and up, GeForce FX and up, or equivalent), nearly all of these render textures can be written at once with multiple render targets, bringing it down to one pass to fill the G-buffers and another to do the light calculations and apply the result to a quad encapsulating each light's sphere of influence.

    Now all I have to do is add FBO support to my own engine, and hope I can get this stuff working there.
    Last edited by psychopath; 08-23-2006 at 11:05 PM.

  6. #6
    Super Moderator VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,598
    I'm having a very difficult time adding shaders into the framework of my code. We've come a long way, but there still seems to be a level of detachment and abstraction between shaders and actual code.

    Maybe this fine line will diminish with time eh?

    What video cards need is a way to implement shader 'slots'. Then you could just specify which shader to use during the drawing, and the video card would have the shader in VRAM. It does now, but it only allows one at a time, which is a bit limiting and requires several calls to render a scene with multiple types of shaders.

    But with cards moving to 512MB and eventually 1GB I'm sure we will see improvements to the current model very soon.

    What I've done is implemented a map of shaders that can be referenced by name. Then you simply tell the framework to use shader "DiffuseShader". All of the shaders are pre-compiled and thus the framework only needs to return the pointer to the shader interface and tell the effects framework to use that shader.
    It works well, but I still have issues with inputs and outputs and not enough texture slots and TEXCOORDn slots.
    Each object can have a shader string that acts as the key to the actual shader interface. The render code looks at this shader string and, if it does not match the current one, changes it, updates the last-shader variable, and renders. Using integer values would be better, since a string comparison takes more time, but it's the best I've come up with so far.
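    The string-key concern can be sidestepped by resolving the name once at load time and using an integer handle at draw time, so per-draw lookup is a plain array index. A hedged sketch (class and member names are hypothetical, not from either engine in this thread):

    ```cpp
    #include <cassert>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Stand-in for the compiled shader / effect interface.
    struct Shader { std::string name; };

    class ShaderCache {
    public:
        // Resolve a name to a stable integer handle once, at load time.
        int handleFor(const std::string& name) {
            auto it = handles_.find(name);
            if (it != handles_.end()) return it->second;
            int h = static_cast<int>(shaders_.size());
            shaders_.push_back(Shader{name});
            handles_[name] = h;
            return h;
        }

        // Per-draw lookup is a plain array index, no string comparison.
        Shader& get(int handle) { return shaders_[handle]; }

    private:
        std::unordered_map<std::string, int> handles_;
        std::vector<Shader> shaders_;
    };
    ```

    Objects then store the handle instead of the string, and the draw loop compares two ints to decide whether a shader switch is needed.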


    How are you doing bump mapping, lighting and all in your shaders? Bump mapping requires at least two textures (environment/light map and bump map) plus any other textures you may need to blend in, such as the base and detail textures. Very quickly you run out of slots.

    Since you seem to be the only one dabbling in shaders I have no one else to talk to about this complicated topic.
    Last edited by VirtualAce; 08-25-2006 at 12:19 AM.

  7. #7
    The Right Honourable psychopath's Avatar
    Join Date
    Mar 2004
    Location
    Where circles begin.
    Posts
    1,070
    My current shader framework is similar to yours, but I'm not using string comparison.

    My base class, CObject, contains an integer variable named shaderID. When an object is created, shaderID is set to the current number of shaders, numShaders, kept by the current engine context's GLSL manager (CGlslManager). After setting shaderID, NewShader is called (a member of CGlslManager). This sets programs[numShaders] to a new instance of CGlsl, which loads and compiles the specified shader; numShaders is then incremented. When drawing an object, I just use glslManager->programs[obj->shaderID], which points to the shader in the list associated with the current object. Unfortunately, this method pretty much assumes that you don't want to switch an object's shader at runtime.

    For bump mapping I just use a DOT3 normal map. Not sure where you're getting a two-texture requirement (unless we're talking about two different things).

    My shaders:

    vertex shader:
    Code:
    varying vec3 T;
    varying vec3 B;
    varying vec3 N;
    varying vec3 v;
    varying vec2 varTexCoord;
    
    void main(void)
    {
       v = vec3(gl_ModelViewMatrix * gl_Vertex);
       
       T = normalize(vec3(gl_MultiTexCoord4.xyz)); //nvidia workaround
       N = normalize(gl_NormalMatrix * gl_Normal);
       B = normalize(cross(T,N));
    
       gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    
       varTexCoord = vec2(gl_MultiTexCoord0);
    }
    fragment shader:
    Code:
    uniform sampler2D baseMap;
    uniform sampler2D bumpMap;
    uniform sampler2D glossMap;
    uniform float bumpiness;
    uniform float glossiness;
    
    varying vec3 T;
    varying vec3 B;
    varying vec3 N;
    varying vec3 v;
    varying vec2 varTexCoord;
    
    float saturate(float inValue)
    {
    	return clamp(inValue, 0.0, 1.0);
    }
    
    vec4 light()
    {
    	vec4 color;
    
    	// expand the normal map sample from [0,1] to [-1,1]
    	vec3 vBump = (2.0 * texture2D(bumpMap, varTexCoord).xyz) - 1.0;
    	vec3 flatNormal = vec3(0.5, 0.5, 1.0); // 'smooth' is a reserved word in newer GLSL
    	vBump = mix(flatNormal, vBump, bumpiness);
    	vBump = normalize(vBump);
    
    	// light vector in tangent space, to match the normal-map normal
    	vec3 toLight = gl_LightSource[0].position.xyz - v;
    	vec3 L = normalize(toLight);
    	vec3 light = vec3(dot(T,L), dot(B,L), dot(N,L));
    
    	// eye vector, also brought into tangent space
    	vec3 E = normalize(-v);
    	vec3 eye = vec3(dot(T,E), dot(B,E), dot(N,E));
    
    	// linear falloff over 1000 units (distance from the real
    	// eye-space vector, not the unit-length tangent-space one)
    	float atten = clamp(1.0 - length(toLight) / 1000.0, 0.0, 1.0);
    
    	//calculate Ambient Term:
    	vec4 Iamb = gl_LightSource[0].ambient * gl_FrontMaterial.ambient;
    
    	//calculate Diffuse Term:
    	vec4 Idiff = (gl_LightSource[0].diffuse * max(dot(vBump, light), 0.0)) * texture2D(baseMap, varTexCoord);
    
    	//calculate Specular Term:
    	vec4 Ispec = (texture2D(baseMap, varTexCoord) * gl_FrontMaterial.specular) * saturate(4.0 * dot(reflect(-eye, vBump), light) - 3.0);
    	Ispec *= (Ispec + (Idiff * vec4(0.25, 0.25, 0.25, 1.0))) * (texture2D(glossMap, varTexCoord) * glossiness);
    
    	color = gl_FrontMaterial.emission + Iamb + atten * (gl_FrontMaterial.ambient + Idiff + Ispec);
    	return color;
    }
    
    void main(void)
    {
    	gl_FragColor = light();
    }
    This works pretty well, although the lighting is probably still inaccurate. I'm almost positive I'm doing something wrong in that shader somewhere.

    It only requires five inputs (baseMap, bumpMap, glossMap, bumpiness, and glossiness). The rest is pulled from OpenGL built-in state (such as light info and color materials).
