
zbuffer accuracy
i've begun reading the opengl programming guide and came to a point where the book mentions that values closer to the near clipping plane have greater depth accuracy than those nearer the far clipping plane (with perspective projections). the reasoning given is that during the perspective divide the z values are scaled nonlinearly. i looked a little further into the subject online, and what i figured is that the z values are scaled that way because it's the only matrix transformation that will produce the desired change from the perspective canonical view volume to the parallel canonical view volume (i don't know any linear algebra except how to multiply matrices, so i might be wrong). my question is then: why must we HAVE to use a matrix transformation? why can't we just alter the x, y (w?) values and leave the z value unaltered?

I'm not sure I understand your question. But we have to alter the Z value when rotating an object. Suppose you had a cube, and rotated it about the Y axis until what had been the back was the front. The z values obviously would have to be transformed. If they weren't, everything would be all messed up.
I also don't know why the z-buffer would lose accuracy with distance... I thought it was storing a z coordinate for each pixel of the screen. That would only become a problem when you got to the point where one pixel covers several z coordinates.

i'm referring only to projection.
edit: if the z-buffer doesn't have enough bits per pixel and the distance between the near and far planes is relatively large, then z values far away from the viewpoint but close to each other might (after transformation) map to the same depth value in the z-buffer, leading to anomalies. this is because the new z value is z' = a - b/z (a, b > 0). as you can see, for values further away the difference in b/z between neighboring z values is small, whereas for closer values it's large.

To answer your question, I have no clue why we must use matrix multiplication to perform these various projections. It's just how things are done behind the scenes, and there's no way to change it.
I do think matrices suck for programming, because putting equations into their equivalent matrix form generally takes more calculations, and more calculations mean slower frame rendering and more floating-point inaccuracy.
take these two functions, which are mathematically equivalent (if you don't believe me, use your TI-83), and pass in 9.81... they should both yield about 32, but the first actually returns about 30
bad:
Code:
inline float METERTOFEET(float meter)
{
    // 1/3 is integer division and evaluates to 0, so the factor
    // collapses to 8 * (1/2.54) -- that's where the "inaccuracy" comes from
    meter *= (((8 + (1 / 3)) * (1 / 2.54)));
    return meter; // without this return the behavior is undefined
}
good!
Code:
inline float METERTOFEET(float meter)
{
    meter *= 100;   // meters -> centimeters
    meter /= 2.54;  // centimeters -> inches (exact by definition)
    meter /= 12;    // inches -> feet
    return meter;
}

Why not just multiply meters by 3.280839895 Silvercord? That will work just fine...

That's much less accurate than doing those steps, because of precision. It's more precise to have the PC calculate 1250.0/381.0 than to enter the truncated constant 3.280839895.
Code:
#include <iostream>
using namespace std;

int main( void )
{
    if( 3.280839895 == ( 1250.0/381.0 ) ) cout<<"Huzzah"<<endl;
    return 0;
}
This outputs nothing.

it all illustrates the same point. I wrote the second version the way I did because it's easier for me to read, even though, if you want to be super technical, it's less efficient because of the added calculations :rolleyes:
I was proving that matrices are bad because the added calculations introduce error, which in rendering terms means z fighting!

XSquared  Try casting both of those values to float and outputting them. Silvercord was returning a float from his function. Floats are only accurate to about seven significant digits, and after that, it's gibberish. The values should agree for at least the first seven significant digits, and you don't need more than that anyway :p

to say that a matrix is a bad calculator might be a little off. matrix multiplication is very general (allowing for all sorts of transformations on points) and, because of its nature, it is ideal for hardware implementation, since most of the computing can be done in parallel. trying to implement very specific transformations in hardware would be much more tedious and expensive (and you probably wouldn't gain much from it), so matrix multiplication is a good thing for computer graphics. following this reasoning, maybe the matrix transformation used for perspective projection exists simply because it would make things unnecessarily complicated to provide for specific projections in hardware. still, i'm rather dubious; i think the chosen transformation has more to do with preserving lines, planes, and so on.

I think what he is talking about is using w-buffers instead of z-buffers. W-buffers give you more accuracy towards the far plane by storing the linear eye-space w (which is proportional to view distance) instead of the post-divide z, so precision is spread evenly between the near and far planes. W-buffers are deprecated and rarely supported in hardware. Stick to your z-buffers.

Depth buffer precision is affected by the values specified for zNear and zFar. The greater the ratio of zFar to zNear, the less effective the depth buffer will be at distinguishing between surfaces that are near each other. If
r = zFar / zNear
roughly log2(r) bits of depth buffer precision are lost. Because r approaches infinity as zNear approaches 0, zNear must never be set to 0. :eek:
from here.