Thread: OpenGL: Translucency and Glow

  1. #1 Hunter2

    Hey guys,
    I was looking through this article about a glowing effect here:
    http://www.gamasutra.com/features/20.../james_pfv.htm

    It suggests using the alpha channel of an image to store the "brightness" of any glowing parts of a given texture; however, I assume that this precludes the use of the alpha channel for translucency, which is unfortunate. I suppose it's possible to store the 'glow' map as a separate texture, but AFAIK this would double the memory requirement. Is there some way (dual alpha channels, for example) to use both translucency and glow with the same texture?

    Please post thoughts, comments.

    Thanks!

  2. #2 VirtualAce

    Glow effects are all post-process, so all of your objects should already have their correct textures on them. The transparency for these objects is handled in their respective shaders. What you need is the resulting alpha from the scene after it has been rendered to a texture. Render the scene to a texture, run that through a shader which writes the alpha out to another texture and blurs the result, then blend the original scene texture with the blurred alpha texture to create the final texture. Render the final texture to a screen-aligned quad.

    Everything prior to post-process remains exactly as it is. You are only interested in the entire scene's alpha and this has nothing to do with the transparency on various objects.

    You will most likely need to use a floating-point render target to blur the alpha on. You will also likely need to downsample the main render to a manageable size, since doing a bloom at full resolution would be insanely slow.

    The main idea of bloom is to bloom only the bright areas, which means that each pixel is run through a brightness filter and a tone map. Once you have figured out which areas need bloom, you can begin the blooming process. The Tron-style blur or glow shader you are talking about is no different, except that it does not pass the pixels through a brightness filter; instead it uses the alpha channel to mark the areas to brighten. The threshold in the Tron algo is that if a pixel's alpha is below a certain level, it won't be affected by the bloom. I think you will need an alpha shader, because the alpha channel in that effect is being treated differently than the norm, so your shader will have to have logic to account for this behavior. Alpha here does not determine level of transparency; it determines brightness and bloom factor.
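
    In GLSL, the extraction pass for that alpha-as-glow scheme might look something like this (a minimal sketch; the texture name and the 0.1 threshold are placeholders I made up, not anything from the article):
    Code:
    uniform sampler2D sceneTex;  // the scene, already rendered to a texture
    varying vec2 texCoord;

    void main()
    {
        vec4 c = texture2D(sceneTex, texCoord);
        // Alpha is a glow mask here, not transparency: anything below the
        // threshold contributes nothing to the blur/bloom that follows.
        float glow = step(0.1, c.a) * c.a;
        gl_FragColor = vec4(c.rgb * glow, 1.0);
    }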

  3. #3 Hunter2

    Thanks Bubba, I appreciate the reply. There are two significant limitations that I'm working with, though:
    1. I'm just starting on graphics, and I don't really understand shaders...
    2. I'm trying to do this on the iPhone, where there are no shaders (it only supports fixed-function OpenGL ES 1.1)

    Uhh... let me rephrase what you've said, as I understand it, and correct me if I'm wrong:
    1. The shader acts as a post-processing mechanism, which lets you define per-pixel processing behaviour.
    2. [some specifics on implementing tron-style bloom using shaders]
    3. Typically, what I want (transparency + bloom) would be implemented by rendering the whole scene (transparency done at render time) to a shader's pipeline(?), which would decide if a given pixel is "bright" or not, and apply bloom on bright pixels only.
    4. The Tron algo uses alpha for brightness/bloom; transparency would have to be determined some other unspecified way.
    5. Real-time bloom needs to be done on a downscaled render for performance reasons.

    So tell me if I'm wrong, but in a nutshell - typically, Bloom is applied to anything that's bright, and we let the alpha determine transparency; but in Tron, we figure out transparency in some clever way, and let alpha determine bloom?

  4. #4 VirtualAce

    Transparency is figured out when the meshes are drawn. This step has nothing to do with the bloom. Bloom operates on the final scene and thus is a post-process effect. So if I have a terrain scene with trees and clouds, and the clouds have some level of transparency, then the scene is rendered as normal. This would be the end of the rendering, except that now you want to do some bloom or some other post-process treatment.
    Usually what happens is this:

    1. Scene is downsampled to 1/8 or 1/16 its size
    2. Some type of luminance filter is applied and the result is a new texture
    3. Some type of tone mapping and/or exposure or some other post process treatment is done to the new texture
    4. A Gaussian filter is applied to the new texture (alternatively, for a cheaper and less elegant bloom, a Kawase bloom filter can be used)
    5. The new texture and original scene texture are now blended back together and output as a final texture.
    6. A screen-aligned quad is drawn using the final texture as its texture.
    7. The final render is presented.

    So at a very basic level the shader finds the bright areas, blurs them somehow, and then blends the result back into the original scene texture. Downsampling to non-floating-point textures and performing Gaussian filtering on non-floating-point textures will result in odd blooming artifacts. Normal textures cannot represent HDR lighting, since their channels are clamped to [0, 1] while HDR is all about values outside that range.
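
    As a rough GLSL sketch, the luminance filter in step 2 might boil down to something like this (the 0.7 threshold and the names are arbitrary assumptions; the luma weights are the standard Rec. 601 ones):
    Code:
    uniform sampler2D sceneTex;  // the downsampled scene from step 1
    varying vec2 texCoord;

    void main()
    {
        vec4 c = texture2D(sceneTex, texCoord);
        float luma = dot(c.rgb, vec3(0.299, 0.587, 0.114));
        // Only bright areas survive; everything else goes to black so it
        // adds nothing when the blurred result is blended back in.
        gl_FragColor = (luma > 0.7) ? c : vec4(0.0);
    }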

    If you don't have floating-point textures you can fake HDR by using a Kawase bloom filter. Basically, instead of using one floating-point texture and doing a Gaussian filter on it, you have two non-floating-point textures. Once you perform your luminance pass and your tone map pass, you do an average blur and save it in the second texture. Next time through you blur the second texture and save the result in the first, and so on. Six to eight ping-pong iterations like this usually result in a nice blur. Then when done you blend the resulting texture into the original scene texture, and the rest of the process is the same. I have implemented this effect and it does look very nice.
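
    A single ping-pong pass might look like this in GLSL (this is the common diagonal-tap formulation of Kawase; the uniform names and the offset scheme are my own assumptions, and the N/E/S/W variant mentioned later in the thread works the same way):
    Code:
    uniform sampler2D srcTex;  // previous pass, or the luminance result on pass 0
    uniform vec2 texelSize;    // vec2(1.0 / width, 1.0 / height)
    uniform float passIndex;   // 0.0, 1.0, 2.0, ... incremented each pass
    varying vec2 texCoord;

    void main()
    {
        // Four diagonal taps offset by (passIndex + 0.5) texels; with
        // bilinear filtering on, each tap already averages a 2x2 block,
        // so the blur widens with every ping-pong pass.
        vec2 o = (passIndex + 0.5) * texelSize;
        vec4 c = texture2D(srcTex, texCoord + vec2( o.x,  o.y));
        c     += texture2D(srcTex, texCoord + vec2(-o.x,  o.y));
        c     += texture2D(srcTex, texCoord + vec2( o.x, -o.y));
        c     += texture2D(srcTex, texCoord + vec2(-o.x, -o.y));
        gl_FragColor = c * 0.25;
    }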

    It is important to remember that currently the only way to do HDR and bloom is by using post-process effects, that is, effects that operate on the final rendered scene. All effects prior to this operate as in-process effects. They are the effects that actually render the objects. These can be anything from diffuse lighting, specular lighting, hemispherical lighting, radiosity, point lights, bump mapping, parallax mapping, skinning, etc. Most of the lighting effects perform very well with a post-process HDR and/or bloom.

  5. #5 Hunter2

    >>2. Some type of luminance filter is applied and the result is a new texture
    The purpose of this is to prevent dark pixels from being blurred/bloomed?

    >>3. Some type of tone mapping and/or exposure or some other post process treatment is done to the new texture
    These are related, but separate effects from the bloom, correct?

    >>[1,5,6]
    Isn't there a power-of-two limitation on texture dimensions? Would this then require rendering the initial and final displays to textures that are in fact larger than the desired screen resolution, and then cropping the scene to the correct dimensions?


    Thanks for the help, Bubba. I was originally planning on doing the blur in two passes (the whole 'separable' thing), by rendering the downsampled scene several times at different offsets/alpha values corresponding roughly to a Gaussian curve, in the horizontal direction first and then the vertical. I'll look into the Kawase filter though; it sounds like it'll probably yield better results.
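
    For comparison, on hardware that does have shaders, one horizontal pass of a separable Gaussian could be as small as this GLSL sketch (the [1 4 6 4 1]/16 binomial weights are just a cheap Gaussian approximation, and all the names are made up):
    Code:
    uniform sampler2D srcTex;
    uniform float texelWidth;  // 1.0 / texture width
    varying vec2 texCoord;

    void main()
    {
        vec4 c = texture2D(srcTex, texCoord) * 0.375;
        c += texture2D(srcTex, texCoord + vec2(      texelWidth, 0.0)) * 0.25;
        c += texture2D(srcTex, texCoord - vec2(      texelWidth, 0.0)) * 0.25;
        c += texture2D(srcTex, texCoord + vec2(2.0 * texelWidth, 0.0)) * 0.0625;
        c += texture2D(srcTex, texCoord - vec2(2.0 * texelWidth, 0.0)) * 0.0625;
        gl_FragColor = c;  // repeat with vertical offsets for the second pass
    }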

  6. #6 Hunter2

    I did some experimentation finally. I didn't really get any definitive information on what Kawase bloom actually is, but it seems to me that the general idea is that doing multiple passes of almost any filter will eventually yield an approximation of a Gaussian.

    I tried a Gaussian blur with a 'range' of 10, then a 2-pass linear blur (Kawase?) with range 5, and then a 5-pass linear blur with range 2. It seems to me that the Gaussian gave the best performance (on the iPhone, render-to-texture seems to be a very expensive operation). Is there any particular reason why the Kawase should be faster? It seems like you'd have to do a similar number of operations either way to get the same blur radius.

  7. #7 VirtualAce

    Kawase is faster because you are only approximating a Gaussian filter. In any shader you can only take so many samples before you run out of samplers; there are ways around this, but they are costly and the computations are costly. Kawase bloom is simple and makes use of the hardware's bilinear filtering to arrive at the final bloom, at the expense of a less elegant result. Kawase also works on non-floating-point textures, while Gaussian usually requires floating-point textures, which can also hurt performance.

  8. #8 Hunter2

    So the main issue is scalability then (limited by the number of samplers)? Does that mean a Gaussian would be equivalent (floating-point textures aside) if there were sufficient samplers to cover all pixels in the specified radius? Also, is each pass of the Kawase linear, or a smaller Gaussian? I was never clear on that point.

    Thanks.

  9. #9 VirtualAce

    Each pass of the Kawase is either a simple linear interpolation between the current data and the previous data (the result of the previous pass, or the original downsampled scene if this is the first pass), or it could be an average of the two. You will have to experiment to see which works best. I find that a bilinear filter of the 4 nearest neighbors works quite well. I normally sample the N, E, S, and W texel neighbors. You can also try an average of these, or attempt an average of 8 neighbors (requiring more samplers), or try the NW, NE, SW, and SE neighbors.

    Each pass is not a true Gaussian blend in the purest sense.
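
    As a sketch, that N/E/S/W averaging pass might look like this in GLSL (all the names here are placeholders):
    Code:
    uniform sampler2D srcTex;
    uniform vec2 texelSize;  // vec2(1.0 / width, 1.0 / height)
    varying vec2 texCoord;

    void main()
    {
        // Plain average of the four nearest axis-aligned neighbors;
        // ping-pong this between two textures to widen the blur each pass.
        vec4 c = texture2D(srcTex, texCoord + vec2(0.0,  texelSize.y));  // N
        c     += texture2D(srcTex, texCoord + vec2( texelSize.x, 0.0));  // E
        c     += texture2D(srcTex, texCoord + vec2(0.0, -texelSize.y));  // S
        c     += texture2D(srcTex, texCoord + vec2(-texelSize.x, 0.0));  // W
        gl_FragColor = c * 0.25;
    }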
