Terrain is represented as one mesh object in my system. It's no different from a spaceship, character, rock, or anything else - it's just that the spaceship and rock don't take up (256*256*gridsize) world units. But it is rendered just the same. So let's say we want to move spaceship A to 100,100,100: we just translate the sucker and render.
Same with the terrain. If a terrain is 256x256x32, then we know the middle is 128,128, which corresponds to 128*32,128*32 in world units. Since the terrain is created around 0,0,0 (just like other 3D objects), if I translate a terrain section to 0,0,128*32 world units, that section will be just north (with north as 0 degrees) of the current section. 128*32,0,0 would be directly west (looking north), or D in the diagram. Now let's say you move into zone 1 and we begin rendering D, A and B. To do this we simply flag those sections to be rendered. Then, say we are in section A, zone 1: we take the old D, translate it to the new world center for D based on the camera position, and render. I need more than one terrain zone because I cannot just take the current section and translate it - it would look like the terrain section was simply jumping in front of the camera, and if you looked behind you, you would see empty space. I'm trying to create the 'illusion' of a never-ending world, and to accomplish this I need several sections of terrain.
Let me also clarify that the zones only apply to the terrain 'section' the camera is inside of. So the square containing zones 1, 2, 3 and 4 is always being rendered. The sections must change orientation because the lower right corner of A does not line up with the upper left corner of the current section.
I'll show you the terrain vertex generation code and index generation code. For the most part this is little changed from the book Beginning Game Programming with DirectX 9.
Code:
void CTerrain::CreateVertices(void)
{
    //Set up loop bounds - the grid is centered on the origin
    int iStartX=-m_iWorldWidth2;
    int iStartZ=m_iWorldDepth2;
    int iEndX=m_iWorldWidth2;
    int iEndZ=-m_iWorldDepth2;
    //Lock vertex buffer
    TerrainVertex *pVerts;
    m_pVB->Lock(0,0,(void **)&pVerts,0);
    int iNumVerts=0;
    //Compute tex coord increments
    float fUInc=1.0f/(float)m_iNumCellsPerRow;
    float fVInc=1.0f/(float)m_iNumCellsPerCol;
    //Row, column and offset counters
    int r=0,c=0;
    int iOffset=0;
    //Note the inclusive bounds (>= and <=): an N x N cell grid needs
    //(N+1) x (N+1) vertices, which is what CreateIndices expects
    for (int z=iStartZ;z>=iEndZ;z-=m_iCellSize)
    {
        c=0;
        for (int x=iStartX;x<=iEndX;x+=m_iCellSize)
        {
            //Create the vertex - height comes from the heightmap, scaled by 32
            pVerts[iNumVerts]=TerrainVertex((float)x,m_puHeightMap[iOffset]*32,(float)z,
                (float)c*(fUInc*16.0f),(float)r*(fVInc*16.0f));
            //Set detail texture coords and multiply by 128 (detail texture is
            //tiled across 1 quad), giving a very high level of detail
            pVerts[iNumVerts].du=(float)r*(fVInc*128.0f);
            pVerts[iNumVerts].dv=(float)c*(fUInc*128.0f);
            //Set diffuse and specular color - only for fixed function pipeline lighting
            pVerts[iNumVerts].Diffuse=D3DXCOLOR(1.0f,1.0f,1.0f,1.0f);
            pVerts[iNumVerts].Specular=D3DXCOLOR(0.3f,0.3f,0.3f,0.4f);
            iNumVerts++;
            c++;
            iOffset++;
        }
        r++;
    }
    //Unlock vertex buffer so D3D can push vertices to the card
    m_pVB->Unlock();
}
void CTerrain::CreateIndices(void)
{
    //Pointer to index buffer
    WORD *pIndices=0;
    //Lock index buffer and access it via pIndices
    m_pIB->Lock(0,0,(void **)&pIndices,0);
    int iVert=0;
    for (int i=0;i<m_iNumCellsPerCol;i++)
    {
        for (int j=0;j<m_iNumCellsPerRow;j++)
        {
            //First triangle
            pIndices[iVert]=i*m_iNumVertsPerRow+j;
            pIndices[iVert+1]=i*m_iNumVertsPerRow+j+1;
            pIndices[iVert+2]=(i+1)*m_iNumVertsPerRow+j;
            //Second triangle
            pIndices[iVert+3]=(i+1)*m_iNumVertsPerRow+j;
            pIndices[iVert+4]=i*m_iNumVertsPerRow+j+1;
            pIndices[iVert+5]=(i+1)*m_iNumVertsPerRow+j+1;
            //Each quad is 2 triangles or 6 indices
            iVert+=6;
        }
    }
    m_pIB->Unlock();
}
It's just a simple regular grid: the Y coordinate is read from the heightmap and multiplied by a scale factor, while Width (x) and Depth (z) are incremented by a constant gridsize.
LH coordinate system:
+X is right
+Y is up
+Z is into the screen
Since the terrain grid is one object in this setup, it does not allow me to test at the face level or vertex level for culling. That would be extremely expensive, so the quad tree is designed to fix this. The quad tree breaks the terrain section into many smaller sections. These smaller sections are then given bounding volumes, and these volumes are tested against the frustum. The cool thing about the quad tree is this: if you divide once, you get 4 terrain sections, similar to the zones map I provided. If the bounding volume of a section is not partially or completely inside the frustum, it is rejected and no more tests are performed. If the bounding volume is completely inside the frustum, the entire branch of the tree is rendered w/o further testing - since the children's volumes are contained within the parent volume, we know all of them are inside as well.
So:
Outside - reject
Partially inside - accept and test children
Completely inside - accept and render children w/o testing
As you can see, this system can determine what needs to be rendered in about 2 to 3 recursions. It is very fast. What's more, if I add things like rocks and trees, they become part of the quad tree as well. If the bounding volume for the section where the trees and rocks are is completely outside, I don't render or test any of them, etc, etc.
The only thing the quad tree does not work well with is dynamic objects. When an object is moving across the sections, you must continually update which portion of the quad tree it needs to be in. My solution is that dynamic objects are not in the terrain quad tree. They are in a quad tree of sorts, but an artificial one, in that I perform a quad-tree test without storing a quad tree. The first test is the most expensive because it must test every dynamic object in the render list, but each recursion significantly reduces the number of objects.
And besides, how many dynamic objects are going to be on a terrain and in or just beyond visible range at any one time? Probably not too many.
Now if you want to stick buildings on the terrain that you can enter, then you cannot just use the quad tree. A fairly simple way is to first render the building using a BSP tree, then render the terrain using the quad tree. Since the building fills the z buffer with correct values, when you render the terrain it will not overwrite the building. Windows in buildings can be accomplished either using the stencil buffer or by designing the geometry in such a way as to have 'holes' where the windows are. If they are 'holes' or rectangular cutouts, then when the building is rendered these 'windows' will not write to the z buffer in that area. When you render the terrain, the terrain will pass the z test inside the window area, and it looks as if you can see through the windows. Later, in a second pass, you can add transparent textures to simulate glass effects. Since the two renderers are separate, you can still achieve excellent outside lighting and excellent interior lighting, because the shaders are relative to the renderer used, not the objects. Note also that since you render the building first and then the terrain, the sun can still create a lens flare through the window. Shadowing can be used inside of the building to simulate light coming in the window, but I've not explored that as of yet.