# Thread: Terrain engine idea

1. ## Terrain engine idea

Ok some of you are going to think I'm crazy on this one and some of you might consider me a genius.

If any of you have done raycasting you know that it is extremely fast when you split the rays into their component parts and then move through the map based on tangents one component at a time.

The inherent problem with terrains is this: too many polies. While this can be solved using quadtrees and other algos, I am going to present another possible solution.

My idea is as follows. Create and/or pre-compute a large terrain from a heightmap. Sectorize the map by assigning numbers to represent the sectors. Store each mesh in an array indexed by sector. For instance, sector 1 would be ID3DXMesh MapMeshes[1], and so on.

Create a small tile map much like Wolfenstein or the old 2D tile maps. Then to draw the scene, simply raycast through the map based on tangents. When you hit a number, render that mesh to the screen if it's visible - this can be tested quite easily using a z buffer. So let's say each mesh was 16x16 cells. You could represent a very large terrain with a minimal 2D map.

The x, y, and z positions of the meshes are easy to compute. Simply build each mesh in local space - that is, everything centered around 0,0,0. Then translate the matrix by x*rayworldx, y*rayworldy, z*rayworldz. This would put the mesh where it needs to be. Of course more calculation would be needed to compensate for cell size, but you get the idea. So in a sense we are still raycasting, but we are rendering meshes that each represent an entire section of the terrain.
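A minimal sketch of the tile-map traversal, assuming a hypothetical 8x8 sector map and simple unit-length ray steps (a real caster would step to the next cell boundary using the tangent components described above; all names here are made up):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

const int MAP_W = 8, MAP_H = 8;

// Hypothetical sector map: each cell holds a mesh ID (0 = empty sector).
int sectorMap[MAP_H][MAP_W] = {
    {0}, {0},
    {0, 0, 3, 0, 0, 7, 0, 0},   // row 2 holds two sector meshes
    {0}, {0}, {0}, {0}, {0}
};

// Step a ray through the tile map, collecting the mesh IDs it crosses.
// Unit steps keep the sketch short; a real caster steps cell to cell.
std::vector<int> castSectorRay(float ox, float oy, float angle, int maxSteps)
{
    std::vector<int> hits;
    float dx = std::cos(angle), dy = std::sin(angle);
    float x = ox, y = oy;
    for (int i = 0; i < maxSteps; ++i) {
        int cx = (int)x, cy = (int)y;
        if (cx < 0 || cx >= MAP_W || cy < 0 || cy >= MAP_H) break;
        if (sectorMap[cy][cx] != 0) hits.push_back(sectorMap[cy][cx]);
        x += dx; y += dy;
    }
    return hits;
}
```

Each ID returned would index into the MapMeshes[] array; translating that sector's mesh by cellX*CELL_SIZE on x and cellY*CELL_SIZE on z (for some assumed CELL_SIZE) puts it in world space.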

This technique would save on the number of actual primitives being drawn at one time because each mesh would have a set number of primitives. As well, you could render this extremely fast because only the 'sections' that lie within your view will be rendered. These sections are computed by the ray caster.

If you render this front to back and store the tops of each column - that is, the screen y component of each vertex - then this would be a fast render. If the y component of the vertex being rendered is above and beyond (on z) the y component of the one previously rendered there, draw it; else skip it. The z buffer is handled by D3D, but the trick would be up to you - think about it. You won't see any terrain vertexes that are not higher on the screen than the highest vertex for that column of pixels. This presents more problems because not all vertexes will lie on columns - it is possible that two vertexes would be connected by an edge, and then you would have to find the point at which that line intersects the desired column, but it could be done.
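The per-column "skyline" test described above can be sketched like this (a sketch under stated assumptions: names are invented, and finding where a connecting edge crosses a column is left out):

```cpp
#include <cassert>
#include <vector>

// Front-to-back column occlusion: remember the highest screen y already
// drawn in each pixel column. Screen y grows downward, so "higher on the
// screen" means a smaller y value.
struct ColumnTops {
    std::vector<int> top;   // current skyline per column
    ColumnTops(int screenWidth, int screenHeight)
        : top(screenWidth, screenHeight) {}   // start at the screen bottom

    // True if a point projected to column x at screen y rises above the
    // skyline (and should be drawn); updates the skyline when it does.
    bool testAndUpdate(int x, int y) {
        if (y >= top[x]) return false;  // at or below the skyline: hidden
        top[x] = y;                     // new highest point for this column
        return true;
    }
};
```

Because the sectors arrive front to back from the ray caster, anything that fails this test is behind terrain already drawn and can be skipped without touching the z buffer.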

With this method I think it would be possible to render extremely large distances with little trouble and the framerate would be very good.

So what do you guys think?

2. i have no clue what you just said....

3. i understood exactly what you said. raytrace only the tiles that get hit in the mesh. the problem i see is reflections, ie you will not know what gets hit or what values change unless you calculate a comparison value to compare it with. i may be off on this, but there might be some things outside of the view that might need to be changed so that it works correctly. good idea

4. implement it and see if it actually works

5. i think there might be some side-effects you haven't thought of... if it were that easy, wouldn't game companies be using the same procedure?
edit: my 800th post!! again!

6. you can't think of every possible problem until you actually DO something, now can you? It's a rhetorical question, don't answer.

7. Well, I'm not even sure it's possible, but it sounds plausible.

Right now I'm focusing on improving the visual detail and adding water - complete with water caustics using vertex and/or pixel shaders.

Oh and about the remark don't you think that game companies would be using it....

No, I don't. Very, very few companies completely write an engine from scratch; most are bought to speed up the design process. And the attitude that it has already been done before, or that the experts somehow know it all and we could never do something they haven't thought of, will simply get you nowhere. Games are all about ingenuity and no... everything has not been done yet. The more I code in 3D the more I realize how many different effects, techniques, algos, etc., can be derived. There are so many effects to create and so many graphical boundaries to break that there is no way possible only a select few will do it. Games are where they are today really because of a collective effort between companies, programmers, artists, directors, gamers themselves, and indie developers. The field has come a long way but there is a long way to go, and I'm not sure it will ever end. Quite exciting really.

Oh and here is a shot of my hi-res terrain - but it suffers from wrapping which the trained eye will immediately catch.

Coord, do you know of a way to multisample the texture being applied to each quad?? Perhaps lightmap it or something, or alpha blend it with a diffuse greyscale texture?? Close up textures just get too blurry to be of any use.

8. I don't think there's very much new theory to be sought. All of the paradigms that we could possibly take in terms of games have been thought of. The problem is getting stuff to work. The idea of voxels (which are really just pixels with a depth component - 'volumetric pixels') has been around since forever, but there's no hardware that can support it, so the problem is implementation. In case you don't know, worlds built out of voxels take vast amounts of data. A given quake2 map converted to a voxel renderer takes over 3 GB of data (3-5MB in .bsp format before).

The direction we are eventually headed in has already been laid out: true, realtime, omnidirectional ray tracing which perfectly simulates shooting photons in an infinite number of directions and interacts perfectly with surfaces. Everything will look 'real', but what we have right now suffices.

EDIT:
Coord, do you know of a way to multisample the texture being applied to each quad?? Perhaps lightmap it or something, or alpha blend it with a diffuse greyscale texture?? Close up textures just get too blurry to be of any use.
I know how to do lightmapping, but again it's only in the context of OpenGL. I don't know if 'multi sampling' and multi texturing are the same thing. It isn't that hard to do once you have the texture information, at least with OpenGL. I have no clue what you guys do with direct doodle.

EDIT1: wait, if you're talking about swapping textures so that it looks nice up close but gradually puts a crappier copy as it gets farther away, then look into mip mapping. And once again, Mr. Helpful over here only knows how to do it with OpenGL.

EDIT2:
as a side note, I think Comanche, a helicopter fighter game, was one of the only commercial games to pull off using a voxel based renderer for landscape

9. Originally Posted by Silvercord
but there's no hardware that can support it, so the problem is implementation.
hmmmmmm that's really not so true anymore. 3d monitors are really taking off here and should be commercially available (fairly cheaply) in a matter of years. *dredges up some links*

http://www.actuality-systems.com/ //this one has specs mentioning how many voxels it can display (i think)

http://www.seereal.com/default.en.htm //uses opponent processing to make the object appear 3d, so im not sure it can really display voxels... maybe in cheater form tho

http://www.dti3d.com/ //another opponent processing one. not too expensive tho.

hmmmmmm there's a much better example of how cool 3d displays are, but i can't remember the website. they use a special projector that projects on a distilled water jetstream. the resolution is amazing and the size nearly limitless. you can also walk thru it

sorry, just realized my post is pretty off topic. someone will find it interesting tho i hope.

10. NO NO NO, there's no hardware that can support it because IT TAKES GIGABYTES OF DATA TO STORE VOXEL INFORMATION OF ANY SIZABLE WORLD. Think about it, if an average Quake2 map needs 3 GB of data for an average sized level, and there are 15 levels, then you're taxing 45GB JUST FOR THE MAP DATA ALONE.

The other approach is storing it as mathematical equations and calculating the voxels at runtime. This yields an executable of reasonable size, but from personal experience (from running other people's demos - i don't program with voxels) it takes approximately 10 minutes to load!

You don't need a '3d monitor' to render a voxel. in fact, when you come right down to it, aren't all voxel renderers just different in the way they treat data? I mean when you ultimately draw the voxels, you're still just projecting and rasterizing the polygons that make up the cube. The advantage of treating the data differently is that voxels allow for perfect frustum and occlusion culling as far as I know (so you no longer fiddle with BSPs and leaves and garbage).

11. Originally Posted by Silvercord
NO NO NO, there's no hardware that can support it because IT TAKES GIGABYTES OF DATA TO STORE VOXEL INFORMATION OF ANY SIZABLE WORLD. Think about it, if an average Quake2 map needs 3 GB of data for an average sized level, and there are 15 levels, then you're taxing 45GB JUST FOR THE MAP DATA ALONE.
Are you saying there's no hardware that can support 45GB of data? i beg to differ

12. What if you have 5 games with 30 maps apiece, and each map has 30% more data? (which is a more likely scenairo than the old quake2). then you're looking at :

5 games * 30 maps * (5 * 1.3) gb per map = 975GB

that's JUST games, and not only that, but it's JUST voxel data. That doesn't include music, movies, your OPERATING SYSTEM, or any other junk

of course that's all still feasible, as in the technology exists, but nobody would pay for it

EDIT:
here's the site - I fluffed the data a bit; the map was only 3 GB after being converted to voxels. that changes what i said above to:

5 games * 30 maps * (3 * 1.3) GB per map = 585 GB. but it's a moot point

http://unrealities.com/web/johncchat.html

13. Originally Posted by Silvercord
an average Quake2 map needs 3 GB of data for an average sized level, and there are 15 levels, then you're taxing 45GB JUST FOR THE MAP DATA ALONE.
hmmmm are you sure about this? i mean this isn't my area of expertise but that seems like a ridiculous amount of data. i mean i have quake 3... w/ like oh... 100 maps and it's under 1 gig. i mean i know the data is compressed and all... but...

anyway tho, i posted those things b/c they were interesting, not to start a flame war, so sorry. 3d displays are the future tho and they'll be here pretty quick.

ps. i can't look at that link at school... damn filter, but ill be sure to look into it later

14. The reason that quake3 doesn't take up that much data is that it's not comprised of voxel data. The 3GB for a single map is what John Carmack said the size of the quake2 map was when he converted it into voxel data. Go to the link and search for the word 'voxel' if you don't believe me. A quake3 BSP is basically just a zip file that stores mesh data, not voxel data. Obviously a quake3 BSP stores a bit more than just that (i.e. collision detection hulls), but you get the idea.

15. Hang on everyone....whoa!!!!

First off I'm very experienced and very knowledgeable about voxel data and terrains. Voxel maps do not take up that much space. Voxel maps are simply heightmaps and colormaps. The heightmap is usually greyscale and inside of the renderer you scale that to a 16-bit WORD which would give you a nice range of values.

The color map is an RGBA map but it could be in any format. It is no different than storing textures for a game. Voxel maps also can be created using several tiling tricks much like the idea I had above. The idea is that you tile the terrain and/or repeat it - thus you have very large view distances at very good framerates. Much of the voxel renderer code is highly optimized and uses many many lookup tables. There were several games that used voxels:

DOOM - yes doom is essentially a voxel renderer because with the raycasting you also had heightmapping so it fits into the category

Comanche helicopter series - except for the modern 3D ones - version 4 and up

Delta Force 1 and 2 - six degrees of freedom, and in 2 you had detail texturing as well as smaller heightmaps overlaid on patches of the larger map - thus grass could actually be walked through and look reasonably believable. Very good terrains - extremely well coded. Also had water effects and other 3D items blended with the terrain - very good engines for voxels. Delta Force 2 did suffer with framerates, but 1 was extremely fast.

Armored Fist 1, 2, and 3? - 1 used voxels for everything - even the tanks which did not look good. 2 used polies for the tanks and voxels for the terrain. This is the same engine that was used in Delta Force 1 and 2 - VoxelSpace 1, 2, and 3 (32-bit color support and MMX support) by NovaLogic

Outcast - a bilinear interpolated or smoothed voxel renderer - although I much prefer Delta Force 1 and 2 - Outcast simply did not look good enough and the framerates were at best tolerable.

Command and Conquer Tiberian Sun - yes this also used voxels to render the terrain and certain objects.

Also some of the earlier games used movies of voxel terrains for their intro scenes - the computer at that time was not capable of rendering voxels in real-time.

Voxels are also used in medicine for cat-scans, xrays, and other examinations. These are extremely powerful voxel renderers. This is how the 3D pictures of the human skull and brain were rendered and textured later. Much can be done with voxels in this field.

Voxels are extremely useful, but a lot of rendering tricks need to be applied to them to get them to look good. First of all, when the terrain is very close to the viewer it can quickly degenerate into a blocky mess. The solution is to use a diffuse filter and change the RGB values inside of the cells based on a diffuse map. This will make it appear as though there are more voxels than there really are.

Also, the height at close ranges should at worst be linearly interpolated, second bilinearly interpolated, and best cubically interpolated - but that is much slower. The colors need to be interpolated as well for good effects. The height map should be extremely smooth - apply the smooth filter to the bitmap many, many times. This will result in very smooth terrains.

Smaller patches of 'voxel noise' can be applied to patches of cells or individual cells. Modulate these with the height values found on the map and you can simulate grass, etc. Detail texturing can be accomplished by rendering textures to the cells instead of simply using the color from the larger color map for the cell. Or even better, modulate the alpha of the current color of the cell with the current terrain texture patch to create a very good looking landscape. As well, you can use an offset map to offset the pixels being plotted - this will cause the defined blocks of color data to look fuzzy, since they will share some color with their neighboring cells. Smooth this and it will look very good. For an example of this go to www.flipcode.com and look up voxel terrains. You will come across a very good voxel renderer that does implement this.

Another method is to voxel texture a triangle mesh. You are using polies, but you render with voxels - this is called voxel texturing.
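As a sketch of the close-range height interpolation mentioned above (the bilinear case; the heightmap layout and function names are assumptions, not code from any of these engines):

```cpp
#include <cassert>
#include <cmath>

// Bilinearly interpolate a row-major 8-bit heightfield at a fractional
// position. Clamps at the right/bottom edge instead of wrapping.
float sampleHeight(const unsigned char* height, int w, int h, float x, float y)
{
    int x0 = (int)x, y0 = (int)y;
    int x1 = (x0 + 1 < w) ? x0 + 1 : x0;
    int y1 = (y0 + 1 < h) ? y0 + 1 : y0;
    float fx = x - x0, fy = y - y0;
    // blend along x on both rows, then blend the two rows along y
    float top = height[y0 * w + x0] * (1 - fx) + height[y0 * w + x1] * fx;
    float bot = height[y1 * w + x0] * (1 - fx) + height[y1 * w + x1] * fx;
    return top * (1 - fy) + bot * fy;
}
```

The same two-lerp structure works for the color map; the cubic variant replaces each lerp with a four-tap weighted blend at a correspondingly higher cost.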

Note that most of these techniques also apply to polygon terrains or triangle meshes. No matter what renderer you use, the detail falls apart the closer you get - so you must add detail. And no, mip-mapping won't fix this, because it is the linear, bilinear, trilinear, or anisotropic filtering that is causing the texture to be blurry. We want it to be blurry so the pixels cannot be seen, but at close range it looks like a blurred mess - so you must blend another, more detailed texture (one that has also been filtered) with the current texture.

If you don't understand this, take a texture and enlarge it in a paint program. Notice the squares of color you get. Now add uniform perlin noise to the texture. The noise acts on all pixels, not just the squares, so we can add detail to the enlarged texture. Now apply a smooth or soften filter and you will see that you still retain the original image, but you have more detail.
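The paint-program experiment above can be mimicked in code - a sketch, assuming an in-memory greyscale image; the noise amplitude and the 3-tap smooth are arbitrary choices (the post describes perlin noise, which is smoother than the plain uniform noise used here):

```cpp
#include <cassert>
#include <cstdlib>

// Perturb each pixel with uniform noise, then run a 3-tap box smooth.
// The noise breaks up the flat squares of an enlarged texture; the smooth
// ties the added detail back to the original image.
void addDetail(unsigned char* img, int n, int amplitude)
{
    for (int i = 0; i < n; ++i) {
        int v = img[i] + (std::rand() % (2 * amplitude + 1)) - amplitude;
        img[i] = (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
    }
    // in-place smooth reuses the already-smoothed left neighbor - cheap
    // and adequate for a sketch; borders are skipped for brevity
    for (int i = 1; i < n - 1; ++i)
        img[i] = (unsigned char)((img[i - 1] + img[i] + img[i + 1]) / 3);
}
```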

There are so many tricks and techniques... far too many to mention here. Also... raytracing as-is looks way too fake. It's TOO perfect. Much needs to be done to add what I would call 'grit' to the picture. Everything in real life is not so color perfect as it is in a ray tracer. Yes, in a perfect environment light would reflect perfectly off of objects and thus give a very shiny sheen to things... but alas, this does not happen. There have been many advancements made, but there is much more to be done.

Voxels overall give very good detail but are extremely expensive, because no video cards natively support their algorithms. They are based on firing a ray out into space - thus ray casting - and no video cards are optimized for this. Current GPUs are optimized for pushing polies - but voxels can produce similar images with less bandwidth. There is much to be done in this field and hopefully it will take off.

But I'm not working on voxels now - I'm working on actual 3D meshes in Direct3D - so I'm not sure where this convo got off track. I was making the point that many of the texture tricks and techniques apply to all types of terrains and objects, no matter what their rendering scheme is.

Also heightmaps and color maps are best stored in quadtrees not BSPs. Here is some code to show you why.

Code:
```int CreateTerrain(int depth, int left, int top, int right, int bottom)
{
    if (depth < 1) return 0;            // recursion bottomed out

    int color = some_color_algo(depth); // placeholder: any per-node color
    DrawBox(left, top, right, bottom, color);

    int midx = (left + right) >> 1;     // split the region into quadrants
    int midy = (top + bottom) >> 1;

    CreateTerrain(depth - 1, left, top,  midx,  midy);   // upper left
    CreateTerrain(depth - 1, midx, top,  right, midy);   // upper right
    CreateTerrain(depth - 1, left, midy, midx,  bottom); // lower left
    CreateTerrain(depth - 1, midx, midy, right, bottom); // lower right

    return 0;
}```
Pause this code after each recursion level and you will see how it renders. Just imagine if you knew where to start based on the player's coords - it would simply be a matter of starting at that node and rendering the tree from there.
