I'm having problems finding a good scene graph structure that has both spatial information and composition information.
For instance, if you have a light source you could make it a node in the graph, and every object that is a child of that node would then have the light applied to it.
However, this type of graph gives no spatial information about the world positions of the objects. It is convenient and extensible for shaders and effects, but not always optimal for rendering: just because an object has a certain light applied to it does not mean the object is actually visible. So you end up traversing the entire graph just to find out which objects are visible.
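To make the idea concrete, here is a minimal sketch of that kind of composition graph (all names are hypothetical, not from any real engine): a light node pushes its state onto a context for its whole subtree during a depth-first traversal, and pops it on the way back up. Note that the traversal has to visit every node, visible or not, which is exactly the problem described above.

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

enum class Kind { Group, Light, Mesh };

// One draw call: the mesh name plus the lights active when it was reached.
struct DrawCall {
    std::string mesh;
    std::vector<std::string> lights;
};

struct Node {
    Kind kind = Kind::Group;
    std::string name;
    std::vector<std::unique_ptr<Node>> children;

    // Depth-first traversal: a Light node applies its state to its whole
    // subtree by pushing before recursing and popping afterwards.
    void traverse(std::vector<std::string>& activeLights,
                  std::vector<DrawCall>& out) const {
        if (kind == Kind::Light) activeLights.push_back(name);
        if (kind == Kind::Mesh)  out.push_back({name, activeLights});
        for (const auto& c : children) c->traverse(activeLights, out);
        if (kind == Kind::Light) activeLights.pop_back();
    }
};

std::unique_ptr<Node> makeNode(Kind k, std::string n) {
    auto p = std::make_unique<Node>();
    p->kind = k;
    p->name = std::move(n);
    return p;
}
```

A mesh placed under the light node comes out of the traversal with that light attached; a sibling outside the light's subtree does not.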
The visibility problem can be solved with a quad-tree (for terrains and the like) or a binary tree such as a BSP (for indoor scenes), which is optimal for rendering. However, such a spatial tree is not optimal for grouping state changes, shader changes, light properties, etc.
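For comparison, here is a bare-bones quad-tree sketch (again, all names hypothetical, objects reduced to 2D points): query() returns only what intersects the view rectangle, so the renderer never touches off-screen objects. But notice that the results carry no lighting or shader information at all, which is the shortcoming mentioned above.

```cpp
#include <memory>
#include <utility>
#include <vector>

struct Rect {
    float x, y, w, h;
    bool contains(float px, float py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
    bool intersects(const Rect& o) const {
        return x < o.x + o.w && o.x < x + w && y < o.y + o.h && o.y < y + h;
    }
};

struct QuadTree {
    static constexpr int kCapacity = 2;  // max points before subdividing
    Rect bounds;
    std::vector<std::pair<float, float>> points;
    std::unique_ptr<QuadTree> child[4];

    explicit QuadTree(Rect b) : bounds(b) {}

    void insert(float px, float py) {
        if (!bounds.contains(px, py)) return;
        if (!child[0] && points.size() < kCapacity) {
            points.push_back({px, py});
            return;
        }
        if (!child[0]) subdivide();
        for (auto& c : child) c->insert(px, py);  // exactly one child accepts it
    }

    void subdivide() {
        float hw = bounds.w / 2, hh = bounds.h / 2;
        child[0] = std::make_unique<QuadTree>(Rect{bounds.x,      bounds.y,      hw, hh});
        child[1] = std::make_unique<QuadTree>(Rect{bounds.x + hw, bounds.y,      hw, hh});
        child[2] = std::make_unique<QuadTree>(Rect{bounds.x,      bounds.y + hh, hw, hh});
        child[3] = std::make_unique<QuadTree>(Rect{bounds.x + hw, bounds.y + hh, hw, hh});
        for (auto& p : points)
            for (auto& c : child) c->insert(p.first, p.second);
        points.clear();
    }

    // Skip whole subtrees that don't touch the view rectangle.
    void query(const Rect& view, std::vector<std::pair<float, float>>& out) const {
        if (!bounds.intersects(view)) return;
        for (auto& p : points)
            if (view.contains(p.first, p.second)) out.push_back(p);
        if (child[0])
            for (auto& c : child) c->query(view, out);
    }
};
```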
I'm interested to know how some of you are dealing with the shortcomings of both of these approaches. Do you just accept that a hierarchical scene graph will not render optimally, or that a spatial tree such as a quad-tree or BSP will not be optimal for state changes in the pipeline?
Mixing these two types of graphs is extremely problematic because, while each solves certain issues, they don't work well together to solve them cooperatively.
Ok, here is an idea I've been thinking about (very briefly, so it's not exactly thought out in any way):
Wouldn't it then be possible to keep the visibility structure (BSP, quad-tree, or whatever is used), but have each object also hold one or several pointers to its parent lights/shaders/render states? That way you could first look up what is potentially visible through the viewport, and, because each object knows which lights, shaders, and so on should affect it, you can then determine which lights and shaders to take into account when rendering.
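A minimal sketch of that hybrid idea might look like this (names hypothetical; the spatial query is stood in by a simple 1-D interval cull rather than a real quad-tree/BSP): each object carries back-pointers to its shader and lights, so after culling you can bucket the visible objects by shader to minimise state changes.

```cpp
#include <map>
#include <string>
#include <vector>

struct Shader { std::string name; };
struct Light  { std::string name; };

struct Object {
    std::string name;
    float x;                           // stand-in for a world position
    const Shader* shader;              // back-pointers into the composition graph
    std::vector<const Light*> lights;
};

// Stand-in for a quad-tree/BSP query: return objects inside the view interval.
std::vector<const Object*> cull(const std::vector<Object>& scene,
                                float viewMin, float viewMax) {
    std::vector<const Object*> visible;
    for (const auto& o : scene)
        if (o.x >= viewMin && o.x <= viewMax) visible.push_back(&o);
    return visible;
}

// Group the already-culled objects by shader so each shader is bound once.
std::map<std::string, std::vector<const Object*>>
batchByShader(const std::vector<const Object*>& visible) {
    std::map<std::string, std::vector<const Object*>> batches;
    for (const Object* o : visible)
        batches[o->shader->name].push_back(o);
    return batches;
}
```

The point is the ordering: visibility is decided purely by the spatial structure, and only the survivors are sorted by render state, so neither structure has to know about the other's concerns.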
My first very brief idea.