Thread: some kind of occlusion culling

  1. #1
    Registered User
    Join Date
    Aug 2001
    Posts
    244

    some kind of occlusion culling

    so i have my scene organized in some kind of tree (e.g. a kd-tree or octree)
    each node has a bounding volume assigned to it (e.g. an axis-aligned cuboid, i.e. a bounding box)

    now i position the camera somewhere in the scene and want to render the stuff.
    so i first check which nodes could be seen (i.e. which lie in the frustum), sort them in z-order, and start pushing the contents of the boxes to the graphics card.
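    That frustum check can be sketched as the classic plane test against each node's bounding box. A minimal sketch in C++; every type and function name here is invented for illustration:

```cpp
#include <array>

// Minimal sketch of a frustum-vs-AABB check (all type names invented).
// A plane is stored as (normal n, offset d), with the convention that
// n.x*x + n.y*y + n.z*z + d >= 0 means "on the inside of the plane".
struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };
struct AABB  { Vec3 min, max; };

enum class Cull { Outside, Intersects, Inside };

// Classic positive/negative-vertex test: for each frustum plane, pick
// the box corner farthest along the plane normal; if even that corner
// is behind the plane, the whole box is outside and can be rejected.
Cull classify(const AABB& box, const std::array<Plane, 6>& frustum) {
    bool straddles = false;
    for (const Plane& p : frustum) {
        // Corner of the box farthest in the direction of the normal.
        Vec3 pos{ p.n.x >= 0 ? box.max.x : box.min.x,
                  p.n.y >= 0 ? box.max.y : box.min.y,
                  p.n.z >= 0 ? box.max.z : box.min.z };
        // Corner of the box farthest against the normal.
        Vec3 neg{ p.n.x >= 0 ? box.min.x : box.max.x,
                  p.n.y >= 0 ? box.min.y : box.max.y,
                  p.n.z >= 0 ? box.min.z : box.max.z };
        if (p.n.x * pos.x + p.n.y * pos.y + p.n.z * pos.z + p.d < 0)
            return Cull::Outside;      // completely behind one plane
        if (p.n.x * neg.x + p.n.y * neg.y + p.n.z * neg.z + p.d < 0)
            straddles = true;          // overlaps this plane
    }
    return straddles ? Cull::Intersects : Cull::Inside;
}
```

    Nodes classified Outside are skipped entirely; an Inside subtree needs no further plane tests; an Intersects node is kept and its children are tested again.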

    the thing is that if i stand right in front of a wall, i would probably still render things that are a kilometer away even though they are occluded.

    so is it a good idea to do the following (and can it even be done that way?):

    ok, the nodes to be drawn are already sorted (closest first)

    i would say that when selecting nodes against the frustum, only the nodes that overlap or are completely contained in the frustum are chosen (instead of, e.g., choosing all leaf nodes)

    Code:
    draw the objects in the first visible node (the leaf the camera is in)
    
    current_node = next visible node in z-order
    
    render the front faces of current_node's bounding box (at most the 3 faces
    that can be seen from the camera position), but do depth testing only and
    count how many pixels would have passed (without actually drawing anything)
    
    if (0 pixels passed) {
      skip this node and all its children, because it is already occluded
      by previously drawn objects
    } else {
      if (there are objects in this node)
        render all objects in this node
      descend into the children (if any) and repeat the same procedure
      (render their bounding boxes and check whether anything would be drawn)
    }
    
    move on to the next visible node
    so, is this a good idea?
    (is it even possible to do depth testing without drawing anything, and just count the pixels? i am just about to start learning pixel shaders)
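    The "depth-test only, count the pixels, draw nothing" idea can be modeled in software to see exactly what such a query does. A toy sketch (all names invented): a plain depth buffer plus a query pass that counts the samples that would pass the depth test while writing neither color nor depth.

```cpp
#include <vector>

// Toy software model of the test described above (all names invented):
// a depth buffer plus a "query" pass that depth-tests a screen-space
// rectangle at a fixed depth and counts the samples that would pass --
// which is what a hardware occlusion query reports.
class DepthBuffer {
public:
    DepthBuffer(int w, int h) : w_(w), z_(w * h, 1.0f) {} // 1.0 = far plane

    // Normal draw: write depth where the fragment is closer.
    void drawRect(int x0, int y0, int x1, int y1, float depth) {
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x)
                if (depth < z_[y * w_ + x]) z_[y * w_ + x] = depth;
    }

    // Occlusion query: count samples that pass the depth test,
    // but leave the buffer untouched.
    int queryRect(int x0, int y0, int x1, int y1, float depth) const {
        int visible = 0;
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x)
                if (depth < z_[y * w_ + x]) ++visible;
        return visible;
    }

private:
    int w_;
    std::vector<float> z_;
};
```

    In the traversal above: if the query over a node's bounding-box faces returns 0, the node and its entire subtree can be skipped.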
    Last edited by Raven Arkadon; 10-03-2006 at 08:25 AM.
    signature under construction

  2. #2
    The Right Honourable psychopath's Avatar
    Join Date
    Mar 2004
    Location
    Where circles begin.
    Posts
    1,071
    If you're using OpenGL, you can use the occlusion culling extension. There may be more work involved with D3D, but I don't really know.
    M.Eng Computer Engineering Candidate
    B.Sc Computer Science

    Robotics and graphics enthusiast.

  3. #3
    Registered User
    Join Date
    Aug 2001
    Posts
    244
    hm ok, i've just read through ARB_occlusion_query and this is basically exactly what i had in mind.

    but i think i'd like to reinvent the wheel for that test - because i think it's a good thing to learn shaders with - i mean, it can't be hard to count the drawn pixels

  4. #4
    vae victus! skorman00's Avatar
    Join Date
    Nov 2003
    Posts
    594
    That test does the same thing the video card would do if you drew in front-to-back order, but at a higher level (the card does it per pixel). I think an occlusion algorithm in a pixel shader is a bit too troublesome, but totally awesome nonetheless!

  5. #5
    The Right Honourable psychopath's Avatar
    Join Date
    Mar 2004
    Location
    Where circles begin.
    Posts
    1,071
    Might be a little difficult, though, since shaders generally don't give you direct access to the Z-buffer contents.

  6. #6
    Registered User
    Join Date
    Aug 2001
    Posts
    244
    hmmm... so i've read around about shaders - and it seems there is no way to let a pixel (fragment) shader write to a "variable" that could be read back by the application? (so communication between fragment shader and app is one-way only)
    or is there some workaround for this?

  7. #7
    The Right Honourable psychopath's Avatar
    Join Date
    Mar 2004
    Location
    Where circles begin.
    Posts
    1,071
    No practical work-around. But theoretically, you could write your values out as per-pixel fragment data, and then read this data back from the screen on the CPU.

  8. #8
    vae victus! skorman00's Avatar
    Join Date
    Nov 2003
    Posts
    594
    Can you render to a texture instead of the frame buffer, and access the texture's memory?

  9. #9
    Registered User
    Join Date
    Aug 2001
    Posts
    244
    >> No practical work-around. But theoretically, you could write variables as fragment data per-pixel, and then extract this data from the screen on the CPU.
    >> can you render to a texture instead of the frame buffer, and access the texture's memory?

    that's what i was hoping to avoid, since i would guess it would really stall the rendering pipeline
    (i.e. setting a texture as the render target, reading the data back, setting the back buffer as the render target again, continuing rendering, setting a texture as the render target again, and so on)

    if only the depth buffer were accessible from a shader, and a shader could communicate with the app via some "global" (per-primitive) variable, life would be so easy


    but if there is no other way than to render to a texture or grab the depth buffer, how is occlusion culling usually implemented?

  10. #10
    The Right Honourable psychopath's Avatar
    Join Date
    Mar 2004
    Location
    Where circles begin.
    Posts
    1,071
    >> can you render to a texture instead of the frame buffer, and access the texture's memory?
    Nope. You would basically be using the screen as a texture, since it's your only output.

    >> but if there is no other way than to render to texture or get the depth buffer, how is occlusion culling implemented usually?
    Usually with API methods like the occlusion query extension in OGL (again, I don't know how it works in D3D).

    Basically, you would precompute the visibility data of each node in your tree system, using OpenGL to do the dirty per-pixel stuff.

    in pseudocode:
    Code:
    setViewToCurrentNode()
    for (each other node) {
      occluded = checkOGLOcclusion()
      if (!occluded)
        mustRender   // draw it
      else
        cantBeSeen   // skip it
    }
    Without the ability to check occlusion per pixel, you would have to rely on basic scale + position + rotation information together with your z-order info... I think.
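    The "precompute visibility per node" idea can be sketched as a PVS (potentially visible set) table: run the expensive per-pixel occlusion checks offline, record the results, and at runtime the test is just a lookup. A hypothetical sketch with invented names:

```cpp
#include <vector>

// Sketch of a precomputed visibility table (all names invented):
// offline, run the per-pixel occlusion test from each node and record
// which other nodes were visible; at runtime the check is a cheap lookup.
class PVS {
public:
    explicit PVS(int nodeCount)
        : n_(nodeCount), bits_(nodeCount * nodeCount, false) {}

    // Called offline, after the expensive occlusion check succeeds.
    void setVisible(int fromNode, int toNode) {
        bits_[fromNode * n_ + toNode] = true;
    }

    // Called per frame: table lookup instead of a GPU query.
    bool isVisible(int fromNode, int toNode) const {
        return bits_[fromNode * n_ + toNode];
    }

private:
    int n_;
    std::vector<bool> bits_;  // n_ x n_ visibility matrix
};
```

    The trade-off is the usual one: a PVS only works for static geometry, but it moves all the per-pixel cost out of the frame loop.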

  11. #11
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    Actually, you can access Z-buffer values. A texture is what? Numbers. A Z-buffer is what? Numbers. So grab the contents of the Z-buffer and pass them as a texture resource to a pixel shader. The problem is that you do not have pre-transformed vertices in the shader, so you cannot compare their values against the Z-buffer, which is indexed by the final x,y position of each vertex in screen space.

    The best way, and possibly the only way, to do this is to draw front to back. That works with the Z-buffer algorithm instead of against it. In fact, the only time you want to draw back to front is in an isometric tile game where the view is fixed: in that setting you want to see the distant tiles as well as the close-up tiles, which each resolve to 4 vertices. In your situation, you want to see the distant vertices if and only if they are not occluded. A simple quicksort on the z-values in your render list should do the trick.
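    The front-to-back sort suggested above can be sketched like this (using std::sort rather than a hand-rolled quicksort; the types and field names are invented for the sketch):

```cpp
#include <algorithm>
#include <vector>

// Front-to-back ordering of a render list by distance from the camera.
struct RenderItem {
    float x, y, z;   // object position in world space
    int   meshId;
};

void sortFrontToBack(std::vector<RenderItem>& items,
                     float camX, float camY, float camZ) {
    std::sort(items.begin(), items.end(),
              [=](const RenderItem& a, const RenderItem& b) {
                  auto d2 = [&](const RenderItem& r) {
                      float dx = r.x - camX, dy = r.y - camY, dz = r.z - camZ;
                      return dx*dx + dy*dy + dz*dz;  // squared distance: no sqrt needed
                  };
                  return d2(a) < d2(b);              // closest first
              });
}
```

    Sorting on squared distance avoids the sqrt per comparison and gives the same order; sorting on view-space z alone would also work and is cheaper still.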

    For transparency issues you might want to see if your card supports alpha-testing. If it does, you won't have to sort your billboards.

    So draw the scene front to back and, of course, turn culling on. Coupled with a BSP renderer or other tree-based renderer, this should give you more than enough frames.

  12. #12
    The Right Honourable psychopath's Avatar
    Join Date
    Mar 2004
    Location
    Where circles begin.
    Posts
    1,071
    Quote Originally Posted by Bubba
    So get the contents of the z buffer and pass this as a texture resource to a pixel shader
    But there's no way (that I know of) to get the Z-buffer contents to pass in as a texture. Or do you mean things like vertex z-values, etc.?

  13. #13
    Registered User
    Join Date
    Aug 2001
    Posts
    244
    well, of course i could read back the whole Z-buffer and pass it in as a texture. but then i'd have to copy the Z-buffer around all the time - right?

  14. #14
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    There is a reason you don't get access to the Z-buffer: you don't need it. Why would you test anything against a 2D Z-buffer when you don't have the correct 2D screen-space coordinates for your 3D vertices? Are you going to transform them to screen space yourself and also have the T&L hardware do it? Then you would be transforming every vertex twice just to gain occlusion functionality.
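    For illustration, "transforming vertices to screen space yourself" means roughly the following (a minimal pinhole projection with an invented interface; a real pipeline uses a full 4x4 projection matrix plus clipping, which is exactly the work the T&L hardware already does):

```cpp
// Minimal pinhole projection from camera space to pixel coordinates
// (invented interface; real pipelines use a 4x4 matrix and clipping).
struct Vec3 { float x, y, z; };
struct ScreenPos { int x, y; float depth; };

ScreenPos project(const Vec3& v, float focal, int width, int height) {
    // Perspective divide: x/z and y/z, scaled by the focal length,
    // then mapped from the origin-centered image plane to pixels.
    float sx = focal * v.x / v.z;
    float sy = focal * v.y / v.z;
    return { static_cast<int>(width  / 2 + sx),
             static_cast<int>(height / 2 - sy),   // screen y grows downward
             v.z };
}
```

    Doing this on the CPU for every vertex, only to have the card redo it, is the double work the post above warns about.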

    There is a better way.

