# Thread: Backface Culling the Complex Way

1. Hello,

Following on from getting my simplistic software 3D renderer up and running, I would like to implement backface culling to limit what polygons have to be rendered.

I understand that essentially you calculate the surface normal of each polygon (via a cross product of its edges), take the dot product of that normal with the view direction, and discard the polygon if it is pointing away from the camera/eye. However, I can also see that there are two possible surface normals for each planar polygon. These are opposites, so that one will be pointing towards the camera and the other away.
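A minimal sketch of that check, assuming triangles given as 3D vertex tuples (all function names here are illustrative, not from any particular library):

```python
# Backface test: build a normal from the triangle's winding via a cross
# product, then dot it with the eye-to-triangle vector. Which of the two
# possible normals you get depends entirely on the vertex order.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def is_backface(v0, v1, v2, eye):
    """True if the normal implied by the winding faces away from eye."""
    normal = cross(sub(v1, v0), sub(v2, v0))  # one of the two candidates
    view = sub(v0, eye)                       # eye -> triangle
    return dot(normal, view) >= 0             # >= 0: facing away or edge-on
```

Note that swapping any two vertices flips the result, which is exactly the ambiguity described above: the test is only meaningful once a winding convention is fixed.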

Most culling algorithms establish a convention whereby the surface normal is selected according to the order in which the vertices are specified (clockwise or anti-clockwise). This works fine in the majority of cases, especially where you have control over the input polygons.

As the input for my program is a text file, and the person writing it may not be aware of the convention to be used for the correct surface normal to be selected, is there another way of determining this?

I think that in the case of a closed 3D shape, you should be able to determine the interior and exterior surfaces. This would be computationally expensive but would resolve the ambiguity without fuss.

In the case of something specified as a volumeless mesh (not closed)... I'm not sure. I don't think you can apply culling if you can't guarantee a convention?

2. The way Direct3D does it is that if the vertices are submitted in clockwise order and the cull mode is set to clockwise, the triangle is culled, and vice versa. It is fairly straightforward to determine whether a set of projected vertices winds clockwise or counter-clockwise. Older, more expensive methods took the dot product of the normal of the triangle in question and the camera look vector; if the sign of the dot product showed that the normal was facing away from the camera, the triangle was culled.
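The clockwise/counter-clockwise test mentioned above can be sketched as a 2D signed-area check on the projected vertices (a sketch only; the convention assumes a y-up screen space):

```python
# After projection, two edges of the triangle form a 2D cross product
# whose sign gives the winding direction.

def signed_area_2d(p0, p1, p2):
    """Twice the signed area; positive means counter-clockwise (y-up)."""
    return (p1[0] - p0[0]) * (p2[1] - p0[1]) \
         - (p2[0] - p0[0]) * (p1[1] - p0[1])

def is_clockwise(p0, p1, p2):
    return signed_area_2d(p0, p1, p2) < 0
```

With a cull mode of "clockwise", a renderer would skip any triangle for which `is_clockwise` returns true.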

3. Think about what you want to accomplish in a logical manner. Essentially what you are asking is the following: I have a random polygon in space, but I have no convention as to what constitutes the 'front' of the polygon, or the 'back' of the polygon, yet I want my program to (magically) determine this for me.

How can it, when you yourself have not established a means to do so?

Code:
```
+---------+
 \       /
  \     /
   \   /
    \ /
     *
```
I ask you, for the above figure, am I viewing the front or the rear of this polygon? You couldn't possibly tell.

Originally Posted by SMurf
As the input for my program is a text file, and the person writing it may not be aware of the convention to be used for the correct surface normal to be selected, is there another way of determining this?
Ignorantia legis neminem excusat. (Ignorance of the law excuses no one.)

You *need* a convention in order to define a front or a backside. If the person editing the files has no clue about the conventions used to generate and use the files, then he should look them up! Remember the law of conservation of garbage: garbage in = garbage out.

Originally Posted by SMurf
I think that in the case of a closed 3D shape, you should be able to determine the interior and exterior surfaces. This would be computationally expensive but would resolve the ambiguity without fuss.
This is still ambiguous: which would be the front, and which would be the backface of this collection of polygons? Those facing the inside, or those facing the outside? I.e. are you viewing from the inside or from the outside of your enclosed space?

4. Originally Posted by SMurf
I think that in the case of a closed 3D shape, you should be able to determine the interior and exterior surfaces. This would be computationally expensive but would resolve the ambiguity without fuss.
I don't think computational intensity would be a real problem here. You would likely only do the computation to determine the front and back of each polygon once (likely at the end of the function loading the text files) after which you put the polygons in memory by your convention of choice, so you don't have to do the whole calculation again.

Originally Posted by SMurf
In the case of something specified as a volumeless mesh (not closed)... I'm not sure.
As MWAAAHAAA said, there will always (in any case) be ambiguity unless you specify a strict convention. That's not to say it's impossible to think of something that will likely do what you intuitively expect.
One solution I can think of in this case is to at least make sure the surface normals are consistent; i.e. if you decide that one polygon is pointing one way, the polygons that share an edge with it point in a direction consistent with that.

Take the attached image for example; this is a simple model consisting of two triangles that share an edge. Vertices b & d are close to the viewer and a & c are further away. Now, for whatever reason, you decide triangle dba is facing the viewer (and is therefore made counter-clockwise). The edge bd is shared between the polygons, so you now need to make sure the polygon between vertices b, c and d traverses that edge in the opposite direction compared to dba, so it becomes bdc and not dbc.
Now you just go through all polygons until every shared edge is traversed in opposite directions by the two polygons that share it.
In order to determine the direction of the first polygon of a connected set, you could for example pick the normal that points away from the origin.
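The consistency pass described above could be sketched like this, assuming triangles are given as index triples (the function and variable names are illustrative):

```python
# Propagate a winding convention across shared edges: the first triangle
# of each connected set fixes the convention, and any neighbour that
# traverses a shared edge in the SAME direction gets flipped.

from collections import deque

def orient_consistently(triangles):
    """Return a copy of `triangles` with consistent winding per connected set."""
    tris = [list(t) for t in triangles]

    # Map each undirected edge to the triangles that use it.
    edge_users = {}
    for i, t in enumerate(tris):
        for a, b in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
            edge_users.setdefault(frozenset((a, b)), []).append(i)

    def directed_edges(t):
        return {(t[0], t[1]), (t[1], t[2]), (t[2], t[0])}

    seen = set()
    for start in range(len(tris)):
        if start in seen:
            continue
        seen.add(start)          # this triangle fixes the convention
        queue = deque([start])
        while queue:
            i = queue.popleft()
            for edge in directed_edges(tris[i]):
                for j in edge_users[frozenset(edge)]:
                    if j in seen:
                        continue
                    # Consistent neighbours traverse the shared edge in
                    # reverse; same direction means the winding is flipped.
                    if edge in directed_edges(tris[j]):
                        tris[j][1], tris[j][2] = tris[j][2], tris[j][1]
                    seen.add(j)
                    queue.append(j)
    return [tuple(t) for t in tris]
```

This assumes an orientable mesh; on something like a Möbius strip no consistent assignment exists, so the pass would silently produce at least one inconsistent edge.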

I do think (and this is purely from an outside perspective; I don't know the precise requirements of your project) that actually culling backfaces when you're not sure which direction they should face is a bit strange. Apparently, to whoever is writing the txt files, it's not important which side of each triangle is the front, so they'll likely expect both sides of all triangles to get rendered. Maybe instead of not rendering backsides, you could render them in a different colour?

5. Based on the explanations given, I would need to establish at least one reference polygon to resolve the rest. I don't think that that's an option in this case.

As this isn't a game per se, I think I should be able to get away with doing a reverse painter's algorithm instead of backface culling.

So that would mean:-
• Prune polygons behind camera/beyond visible range;
• Sort polygons nearest Z to farthest Z (might need to split polys for accurate comparison);
• For each polygon, clip against clipping region and draw if anything left. Add what was drawn to the clipping region.
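The sorting step in the list above could be sketched like this (a sketch only, assuming camera space with the camera looking down +z; as noted, concave overlaps may need polygon splitting for a fully correct order):

```python
# Order polygons front-to-back by their nearest vertex, so the reverse
# painter's algorithm draws the closest polygons first.

def sort_front_to_back(polygons):
    """polygons: list of vertex lists [(x, y, z), ...]; nearest z first."""
    return sorted(polygons, key=lambda poly: min(v[2] for v in poly))
```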

The only thing I could have an issue with there is that the clipping region would be 2D and the polygons 3D. I could extrude the region into 3D...

6. Originally Posted by SMurf
Based on the explanations given, I would need to establish at least one reference polygon to resolve the rest. I don't think that that's an option in this case.
As this isn't a game per se, I think I should be able to get away with doing a reverse painter's algorithm instead of backface culling.
To be sure: unless you're working strictly with convex models, backface culling is not an alternative to depth-checking or painter's algorithms. A polygon can face the camera and still be behind another polygon. In other words, you need to make sure not to draw polygons that are further away on top of closer polygons, regardless of whether you use backface culling. Backface culling is just an optimisation based on two assumptions: that checking whether a polygon faces the camera is much faster than drawing it (which, given the simplicity of the check, is almost always true), and that a significant number of polygons actually face away from the camera (which is true for about half of all polygons, assuming your models have backsides).

By the way, why a reverse painter's algorithm?

Originally Posted by SMurf
The only thing I could have an issue with there is that the clipping region would be 2D and the polygons 3D.
Perform your vertex projection before doing the clipping.
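A minimal perspective projection of the kind suggested, so 2D clipping can follow (`focal` is an assumed focal-length parameter, not anything from the thread):

```python
# Map a camera-space point onto the 2D image plane by dividing by depth.
# Points with z <= 0 (behind the camera) must be pruned beforehand.

def project(vertex, focal=1.0):
    """vertex: camera-space (x, y, z) with z > 0; returns screen (x, y)."""
    x, y, z = vertex
    return (focal * x / z, focal * y / z)
```

After this step every polygon is a 2D shape, so it can be clipped directly against the 2D region of the screen already covered.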

7. Originally Posted by Boksha
To be sure: unless you're working strictly with convex models, backface culling is not an alternative to depth-checking or painters algorithms.
I know, but if you've a list of say half a million possible polys to draw, it would be nice to be able to discard a significant number without projecting and clipping ALL of them.
Originally Posted by Boksha
By the way, why a reverse painter's algorithm?
Yahuh. The painter's algorithm is defined as drawing from the background towards the camera. The problem with that, from an efficiency perspective, is that every polygon has to be drawn in full, since the foreground is simply painted over the background.

The reverse to this is to draw the foreground, keep a note of which parts of the projection plane have already been drawn (the clipping region) and clip out those parts when drawing subsequent polygons. Complexity *may* increase as a result of the clipping process creating new polygons, but if you've got one polygon so close to the camera that it blocks the view of everything else then you only need to draw it and not anything else.
Originally Posted by Boksha
Perform your vertex projection before doing the clipping.
Is that the best way though? To me that seems like getting all the way to the end of the rendering pipeline before potentially discarding a poly, wasting the work done with it.

8. Originally Posted by SMurf
I know, but if you've a list of say half a million possible polys to draw, it would be nice to be able to discard a significant number without projecting and clipping ALL of them.
True, I was mentioning it just in case.

Originally Posted by SMurf
The problem with that from an efficiency perspective is that all polygons have to be drawn as you overlap the background with the foreground.
The question, though, is whether that is actually slower than going through a lot of work for each polygon to determine whether it has to be drawn at all. I'm not sure how this works out in software rendering, but I'm pretty sure that with rendering hardware, a forward painter's algorithm will be much faster than a reverse painter's algorithm based on keeping a complex map of polygons already drawn on the screen (provided you're doing it on a per-poly basis).

Originally Posted by SMurf
Is that the best way though? To me that seems like getting all the way to the end of the rendering pipeline before potentially discarding a poly, wasting the work done with it.
What other work are you doing on your polygons other than projecting them?
Regardless of that, since what matters is where the polygon ends up on the screen, if you really want to use a reverse painter's algorithm you will have to project them. You could calculate whether a polygon is visible without calling your projection function, but the calculation would include all the calculations done in the projection function.

9. The problem with backface culling in screen space or in view space is that you still transform the vertices down the pipeline just to discard them. However, there is no real way to eliminate this. The simplest method is to allow the user to set a winding order for the vertices: everyone involved in the modelling process understands the desired winding order and builds the models, and/or saves them, in such a manner as to fit it. If this is not followed, there is no way to ensure proper backface culling in all cases.

There are also situations where you want to turn off backface culling, such as when rendering skyboxes or anything with an interior or backside that you can see at all times. A good example is a teapot, like the Direct3D teapot. If you turn on backface culling, set the winding order to CCW, and the model is ordered correctly, you will still get weird results when looking at the spout: it is an open cylinder, yet its backfaces will be culled, which makes it look very strange. Culling certainly must be a render state in order to work in all cases.

Culling has nothing to do with the z-buffer or other algorithms that eliminate vertices from being transformed. Backface-culled polygons have passed the frustum cull test, alpha blend test or alpha state test, occlusion cull test, etc., and would be drawn except that they do not face the camera. There is no early out for these types of polygons.

Again, the cheapest way to determine back or front face is to allow a winding order to be set, and if a polygon does not follow this winding order, skip it and move on. This also offers a lot of flexibility to the user of the system. I would not want to compute dot products, and there is no way to precompute them, because every time the camera look vector changes, those dot products change as well.

The only way to stop certain vertices from being transformed is to use BSPs, quad-trees, oct-trees, etc., but your renderer should not be concerned with that, since it is only providing rendering services. The users of your renderer will use those data structures to prevent certain vertices from being rendered, but your renderer should render all vertices it is told to render, based on the render states that have been set. Again, this depends on the role of your renderer: if it is meant to be a generic software renderer, it should make no assumptions about the items it is rendering other than those it has been told to make (via render states and settings); if it is a specific type of renderer, you can make more assumptions about the data being fed to the system and its desired role.

No renderer can account for all situations, so it is better to provide a base layer of functionality that 'gets stuff to the screen' and therefore makes no assumptions. If some user tells your system to render a billion vertices at 4 FPS, then so be it. It is the user's job to determine what data they want to send to the renderer and to use whatever means necessary to achieve the desired FPS for their specific product. As I understand it, you are not building a BSP renderer or a quad-tree renderer; you are building a software renderer. So it is pretty simple: render what is sent to you, nothing more, nothing less. Other middleware code that sits above your renderer can then decide more about its particular use.

I believe attempting to make the rendering faster by using other spatial algorithms will hard-code the system, and I feel the scope of the renderer you are building is too broad. I would concentrate on providing the rendering functionality alone.