Here is a screenshot from my most recent voxel algo.
It has taken quite some time to get it to look this good.
The map right now is only 100x100 so you can clearly see the repetition in the distance. However with bigger maps and detail textures or environment textures...this would probably look very very good.
It uses bilinear filtering on height and color in the foreground and linear filtering on height and color in the background. It also linear interpolates between voxel color and fog color as the distance from the camera increases.
lookin good. Is the voxel implementation done from scratch or does DX have some sort of voxel support? Also, is the terrain just coloured (i.e., not textured)? That would look much better with some procedural texture generation based on the height map. good stuff :)
very nice. Could you show us (or explain) your algorithm? What fps did you get, and how much RAM did you use? And also, how do you get bilinear filtering working; what I mean is, how do you fetch texel information?
Ok the algo is very simple. Right now it is not even 3D. That is you cannot look up and down very far before it all falls apart, but I'm working on that.
There were 2 major problems with my voxel engines in the past. You couldn't see very far, and the up-close terrain did not look detailed enough.
Here is how it is done, or at least how I did it..
You need 3 maps to get this picture. A height map (smoothed many many many times in a paint program), a color map which maps color to each voxel, and what I call a dithering texture that is simply a black and white smoothed texture.
World space vs Local/Model space
Now when you raycast across the world you also must use a cellsize or you will get very ugly stairstep patterns because the change in height is too large over the change in distance. So you need to widen out the grid a bit, say by 4 units as in this picture, and also choose a magnification factor by which to magnify the heights for perspective projection.
Simple Ray casting
The algo is a variation of the one at www.flipcode.com. Ok so you start raycasting at the player's world coordinates. To check where you are in the maps, you simply divide the ray's world coords by CELLSIZE. Check to make sure the rays don't extend off of the map (if they do...reset them -> this causes the map tiling you see in the photo). This means that for good renders you must have a map that is seamless.
Now when you have your information as to where the rays hit the map, grab the height at that spot. Also grab the color at the same spot in the color map. Then check the distance -> the distance is simply found by cos() or sin(), not sqrt(). If the distance is 'close' then you also grab 3 other heights - so you now have four heights: h1 at (x,y), h2 at (x+1,y), h3 at (x,y+1), and h4 at (x+1,y+1).
Now to filter this height value we need to know how far into the cell we are on x and y - or essentially the ray's barycentric coordinates in the cell.
Then use these as the bilinear filtering coefficients to bilinear filter, and then magnify the result in prep for projection. This is done for both color and height at maxdistance * .2, or 20% of the total distance.
If the distance is farther out we don't need as much filtering so I simply use h1 and h2 and rayu -> height = LI(h1,h2,rayu)<<MAGFACTOR;
You could also grab three heights and simply do: height=(h1+h2+h3)/3. or use 4 heights if you wish.
Detail texturing and/or dithering
Now this will cause the color to blur out which is good cuz you lose the pixels, but it is bad because you lose clarity. Here is where detail texturing comes in. You can do one of two things here. You can either use a dither texture which tells you how much to lighten or darken the current voxel color or you can alpha blend two colors together - the current voxel color with a texel from a detail texture (could be anything like rocks, sand, etc). The detail texture should be pretty high res or you won't see much difference. To index into the detail texture:
Simple. So if the detail texture is 256x256 and we are at .5, .5 barycentric, or right smack in the middle of the cell, we grab a texel from 128,128 and blend that with the current voxel color using the alpha blending algo or a simple linear interpolation. A linear interpolation looks very good. You linear interpolate using distance as the coefficient. Essentially if distance is close...lots of detail, because we are selecting more of the detail texture color than the voxel color. If we are far away we select more from the voxel color than the detail texture. This is what causes the fading out of the detailing and it looks very good.
RGB finalcolor=LI(voxel_color,detail_tex_color,distance_coef);
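A minimal C sketch of that indexing and blend (the names detail_tex, DETAIL_SIZE, and the RGB struct are illustrative, not from the post; at .5,.5 this grabs texel 127,127 due to integer truncation):

```c
#include <stdint.h>

#define DETAIL_SIZE 256                       /* hypothetical 256x256 detail texture */
typedef struct { uint8_t r, g, b; } RGB;

static RGB detail_tex[DETAIL_SIZE][DETAIL_SIZE];

/* Linear interpolation between two colors; t=0 gives a, t=1 gives b. */
static RGB li_rgb(RGB a, RGB b, float t)
{
    RGB out;
    out.r = (uint8_t)(a.r + (b.r - a.r) * t);
    out.g = (uint8_t)(a.g + (b.g - a.g) * t);
    out.b = (uint8_t)(a.b + (b.b - a.b) * t);
    return out;
}

/* Index the detail texture with the in-cell (barycentric) coords, then
   blend toward the voxel color as distance grows. */
static RGB detail_blend(RGB voxel_color, float rayu, float rayv,
                        float distance, float max_distance)
{
    int tx = (int)(rayu * (DETAIL_SIZE - 1));
    int ty = (int)(rayv * (DETAIL_SIZE - 1));
    float distance_coef = distance / max_distance;   /* 0 near, 1 far */
    /* near: mostly detail texel; far: mostly voxel color */
    return li_rgb(detail_tex[ty][tx], voxel_color, distance_coef);
}
```

This is exactly the fade-out described above: the coefficient slides the blend from the detail texel to the plain voxel color with distance.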
Project onto screen accounting for eyeheight as well. Eyeheight is simply the height where the player is currently standing plus a certain amount so we are not inside the ground or laying down on top of it. This must be magnified by the mag factor as well to match the rest of the terrain values. Then to project:
This is a simple perspective project. We don't need x because we know x since we are simply rendering one or two columns at a time. We only need Y.
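One plausible shape for that Y-only projection (the exact formula isn't shown in the post; horizon and scale are assumptions standing in for the screen center and magnification):

```c
/* Project a voxel column top to a screen row: the height difference
   between the voxel and the eye, divided by ray distance and scaled,
   measured down from an assumed horizon line. */
static int project_y(float voxel_height, float eye_height,
                     float distance, float scale, int horizon)
{
    /* higher terrain => smaller screen y (further up the screen) */
    return horizon - (int)((voxel_height - eye_height) * scale / distance);
}
```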
Fish eye correction using sin() correction not cos()
Just a simple perspective projection. However ray distance has to be either altered or the ray increments altered. Problem is that ray casting in spherical coordinates causes distortion when you move to rectangular coords. So you must alter the rays a bit to flatten them out. We don't want a fisheye view we want a projection. Here is what I use:
where angle is from -30 to +60 and ray increment = (HFOV/SCREENWIDTH).
Most people say to use cos() correction but I don't like to. Thanks to Peroxide over at www.peroxide.dk for suggesting sin() correction instead.
Ok, so we have where we should start rendering on the screen and we now have the correct color. But we have one more step. We should only render if the current screeny is higher on the screen than the last screeny. So we need to store the last screeny so we can compare the two. The reason for this is that if the current screeny is farther down on the screen than the last...we would be overwriting or overdrawing, and we don't want that. Essentially this means that our algorithm is currently casting over a dip in the terrain or behind a hill. Since the far side of the hill is not facing us...we shouldn't see it anyway. This really speeds the algo up. We are simply wave surfing over the tops of the hills and climbing up each hill to reach our new rendering point. Quite fast really. There is absolutely no overdraw in this algorithm. Each pixel will only be set exactly once per frame. Very nice.
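The per-column test can be sketched like this (draw_span is a stand-in for whatever blitter you use; here it just counts calls for illustration):

```c
/* Placeholder blitter: a real one would fill pixels from y_top to
   y_bottom in column x. This one just records that a span was drawn. */
static int spans_drawn = 0;
static void draw_span(int x, int y_top, int y_bottom, unsigned color)
{
    (void)x; (void)y_top; (void)y_bottom; (void)color;
    spans_drawn++;
}

/* Only draw the part of the column that rises above the highest pixel
   drawn so far, then remember the new top. This removes all overdraw. */
static void emit_voxel(int x, int screeny, int *lastscreeny, unsigned color)
{
    if (screeny < *lastscreeny) {          /* higher on screen than before */
        draw_span(x, screeny, *lastscreeny, color);
        *lastscreeny = screeny;            /* climb up the hill */
    }
    /* else: the ray is crossing a dip we cannot see; draw nothing */
}
```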
There are several ways to do this. You can keep track of the totaldistance or you can simply use ray_distance and compare it with max_distance. The max_distance is computed easily since we know we want the map to extend to the horizon, we know the viewing angle, and we know the start location. So we can calc exactly how far we need to cast to get it to look right. Once we have cast one complete column, we increase the angle by our stepsize (HFOV/SCREENWIDTH), check for >360 degrees, and increase our column (the for loop takes care of this).
The sky is simple to do. You know exactly where the render stops on the screen. It will simply be lastscreeny. Check to see that this is not 0 because if it is...we can't see the sky in that column. Here you can either draw a linear interpolated line using whatever color you want or you can do some skymapping from lastscreeny to 0. It looks good either way.
Fog is pretty simple too. I'm using nearly the same equations as in Direct3D for pixel fog, but not quite. All I'm doing is linear interpolating between the fog color and the voxel color.
The coefficient is simply a value from 0 to 1 indicating how much fog we need. This can be found by fog_coef=ray_dist/fog_dist. More has to be done on this equation if you don't want fog to start at a distance of 0. But that's not hard to figure out. Then you simply do this:
RGB final_voxel_color=LI(voxel_color,fog_color,fog_coef);
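For the "fog shouldn't start at distance 0" case mentioned above, one common form is a clamped linear ramp between an assumed fog_start and fog_end (with fog_start = 0 this reduces to ray_dist/fog_dist):

```c
/* Fog coefficient with a configurable start distance, clamped to [0,1].
   fog_start and fog_end are assumed parameters, not from the post. */
static float fog_coef(float ray_dist, float fog_start, float fog_end)
{
    float t = (ray_dist - fog_start) / (fog_end - fog_start);
    if (t < 0.0f) t = 0.0f;       /* closer than fog_start: no fog */
    if (t > 1.0f) t = 1.0f;       /* beyond fog_end: full fog */
    return t;
}
```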
Ray information sharing
There are many, many, many tricks you can use for this algo to make it faster. The one that I use is what I call ray information sharing. I'm not sure what the technical name is but I think it is a close cousin to adaptive ray casting. Let's say we have a screen that is 320 pixels in width. This gives us 320 angles, one per column, in one render. The way to reduce this amount is to first select a detail level. All you do is instead of drawing one column at a time...you draw two or three. Increase sx, or the screenx, by this amount and increase your angle by stepsize*detaillevel. This might cause blockiness if you use a large value. But this effectively decreases the number of angles by 50%, 75%, etc.
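The column-skipping loop can be sketched like so (cast_column is a placeholder; here the function just returns how many columns were actually cast so the saving is visible):

```c
/* Render every detaillevel-th column, widening each drawn column to
   cover the skipped ones. Returns the number of columns cast. */
static int render_frame(float stepsize, int screen_width, int detaillevel)
{
    int columns = 0;
    float angle = 0.0f;
    for (int sx = 0; sx < screen_width; sx += detaillevel) {
        /* cast_column(angle, sx, detaillevel) would draw one column
           'detaillevel' pixels wide here (placeholder) */
        angle += stepsize * detaillevel;
        columns++;
    }
    return columns;
}
```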
Don't cast the entire distance for all columns
Another trick is a variation of the above. In the distance we really don't need a lot of detail because you end up with just lines of pixels...which already look good when squeezed together. So on every other ray or every 4th ray you only cast half the distance. Then on every 4th ray you simply draw a large voxel 4 pixels wide at the current color. Again this might cause a lot of blockiness but it will speed up renders a lot.
You can also increase the step size the farther out you get. This will be a major speed up in the algo at the cost of visual quality. If you are on a flat land area you can simply speed across it very fast and slow down for areas of detail. This causes some odd artifacts but probably no one will even notice them. Just make sure the terrain close to the camera looks good and no one will care.
That's about it. I'm looking to port this to true 3D ray casting so you can look up and down 90 degrees and I'm going to optimize it a bit using adaptive ray casting.
Sorry it's so long...but the algo is complex.
Finally, here are the filtering equations I'm using. They are implemented in asm using floating point instructions. They can also be done with SSE and SSE2 instructions but my CPU does not support them. MMX is not well suited for this because it only supports integer operands, which would require using fixed point math, a step that complicates a lot of the code.
Incidentally this shot is from an algo coded in pure QuickBasic, not C. The C version is not quite done yet. I wanted to get the algo working without worrying about the C graphics overhead. I wrote a small QB library to access the SVGA and used it for this render. This is a 640x480 32 bit color render. To get the screen capture I simply wrote the screen info to disk in RAW format and converted it to a JPG in my fav paint program.
I hope to post a working, real-time C DX version of this soon so everyone can walk around the terrain. This terrain will also use bilinear filtering to allow you to walk over it...similar to my polygon terrain I posted before. There is not a lot of difference between polygon terrains and voxel terrains. Both can be optimized highly and both use level of detail tricks as well as detail texturing. Polygon terrains also lose color clarity up close, so you must blend in a dither texture or detail texture to get them to look right. The DirectX version will use my framework classes and will be a demo of what you can do with my basic framework classes.
Also there is a lot more detail that can be added like individual grass straws, swampy areas, environment textures, per pixel lighting, etc. Voxels may not be widely supported today....but that does not mean they cannot produce very good and in some cases, better, renders than polygons. What I did not do is per pixel light the terrain. You can pre-compute this based on a ray angle from the voxel to the sun or light source. Then when you grab the color...the shadows and lighting are already there in the bitmap.
That would look awesome. You can also use animated detail textures for the water and alter the height values of the water to make it look like waves. You might even be able to bump map the voxels by altering the normals but I have never seen it done.
Here is a screenshot with a CELLSIZE of 2 and magnification factor of 256.
Try to ignore the repetition and realize that if the map was larger this would look like you could literally see for miles and miles. Note that you can also move to any hill in the distance - the map never ends.
There are some flaws - look at the water you can clearly see where the map starts and ends.
Very nice work! The water also blends up the cliffs, but throwing a more thought out texture on there would fix that I'm sure.
Actually blending up the cliffs will be ok as soon as I place a water texture and water level in the map. But yes you would have to stop blending at some point to get that to work right.
I'm trying to do water and terrain in one pass but I'm having major problems getting it to work. It may require another separate pass.
Here is the color map. (Attached image) I have created a revised color map that does not include water. This map allows me to nicely blend in a water texture with the voxels thus making it look as though you can see some of the terrain through the water. Quite nice.
I've also added some dry cracked ground textures as well as grass textures, and am also experimenting with smaller heightmaps to increase the height detail up close. Essentially you could model grass straws with these heightmaps, or small undulations and changes in the terrain that are too fine for the master height map to capture.
I've also figured out how to get more or less detail.
More detail: increase resolution.
Less detail: decrease resolution (values lower than 1.0 create blocky images)
Also have added detail textures but for some reason I'm getting strange borders around my textures. Have added optimization that checks for top of screen condition. If so...that column stops. Really speeds up renders in front of big hills. Have added adaptive ray casting and adaptive distancing.
Adaptive ray casting -> distance based:
Then inside of the loop:
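The actual code isn't shown in the post, but a guess at the distance-based step adjustment might look like this (the thresholds and multipliers are made up for illustration; the idea is just that the ray step grows with distance from the camera):

```c
/* Grow the ray step as the ray gets farther out, where less detail is
   visible anyway. Breakpoints (33%/66%) and factors (2x/4x) are
   illustrative assumptions. */
static float adaptive_step(float base_step, float ray_dist, float max_dist)
{
    if (ray_dist > max_dist * 0.66f) return base_step * 4.0f;
    if (ray_dist > max_dist * 0.33f) return base_step * 2.0f;
    return base_step;
}
```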
Going to add flat land stepping soon. If the last height grabbed subtracted from the current height is less than a certain LOD level then the ray increments will be boosted since the ray is on flat land. As the height detail increases...the ray steps will decrease.
This may eliminate the need for adaptive distancing because it is essentially the same thing except that the ray step can be changed according to the detail of the map. So completely flat maps would render extremely fast.
I've also added a second water pass that blends in a water texture according to a water level specified by the programmer. All in all it looks very good and I'm still experimenting with it right now. After I get all the optimizations done my DX version will be created. Hopefully it will look good and be extremely fast.
As you can see the possibilities are endless.
I see a use for raycasting, but not for what they say; they said that it is useful for games like Wolfenstein 3D (where walls are 90 degrees from the ground), but why not just load it into a world with vertices? I do see a use for it with something like heightmapping though.
For your heightmap, what do you mean by smoothed? So a cellsize is the world coordinates that the user can see divided by the number of pixels of the image? How do you move by an angle in a picture? Like say you "shot" a ray out until it hit a "wall" at a 25 degree angle? (Could you show me the link to raycasting?) Why not use sqrt() to find the distance? What other three heights would you grab when it was close? What does barycentric mean in this case?
Terrain map resolution
Each grid in a terrain or map has a definite resolution of 1 in memory. This won't do because if you have a map that is 256x256 the max you will ever get out of it is 256x256 units. So you need to enlarge the grid in prep for projection. Also by setting the cellsize to 1, you effectively nullify any interpolation that you do since you cannot interpolate across 0 distance.
Non orthogonal/orthogonal walls
Raycasting can be used for orthogonal or non-orthogonal walls. For non-orthogonal walls you must check intersections against line endpoints which represent your walls. Each wall has a start, an end, a height, and a texture. To figure out the intersection is quite simple:
line 1 - y=mx+b
line 2 - y2=m2x2+b2
Assuming we are always in a positive quadrant we can set the second b2 to 0 since it will never have a y intercept. Set these equations equal to each other and solve for the unknowns. Older games like Dark Forces used this type of raycasting as does REX3D, a home-built non-orthogonal raycasting engine.
Now to move by an angle in a picture you must remember that you are casting rays out over a grid. Each grid point has a height and a color value (in 2 sep. maps). My algo draws the screen in vertical columns from left to right and has an HFOV of 60 degrees. So to find the ray increment for your screen res you simply do HFOV/ScreenWidth. For example with a 60 degree field of view and a horizontal res of 320, your angle increment is .1875. When you divide 360 by .1875 then you have a maximum of 1920 angles. These can be pre-computed so that cos and sin boil down to a simple table look up. It is faster than computing sin and cos regardless of cache misses....I've done this a million times and its always faster with look ups.
A barycentric coordinate is essentially coordinates relative to the polygon we are in. Since we do not use polygons here, it is a representation of the distance the ray has traveled into the cell.
Let's say I have a cellsize of 5 and my ray is at 249,129 in world space. This would be 249/cellsize, 129/cellsize in the actual map.
So we take the values from the cell at 49,25 (not 50,26). Then to find the barycentric coordinates we simply find the floating point fraction between 0 and 1 - that is, how far we are between 49 and 50, and between 25 and 26.
You can use floor or another function to do this. I normally just do this:
So we are at .80, .80 in barycentric coordinates in cell 49,25. In other words we are 80% deep into the cell on x and y for cell 49,25.
Take the four corner values for color and height and interpolate using the barycentric coords we just computed. This essentially draws a straight line from the top left corner of the cell to the .80,.80.
Barycentric coords here are very similar to Direct3D texture coordinates. They range from 0 to 1, but unlike Direct3D coords, will never be greater than 1.0.
Essentially they are barycentric coordinates because they journey through the texture space of the cell. This is also the way that Wolfenstein computed where in the texture the ray hit the wall so it knew where to start drawing in the texture for the wall.
The heights/color that you grab for bilinear filtering are:
v1=(x,y) - use this for linear interpolation on x
v2=(x+1,y) - use this for linear interpolation on x
v3=(x,y+1) - use this for linear interpolation on y
v4=(x+1,y+1) - use this for linear interpolation on y
Then use your barycentric coords to interpolate:
Bilinear is just two linear interpolations. We interpolate on x and y.
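Those two interpolations written out in C (LI is the linear interpolation the post has been using; v1..v4 above correspond to h1..h4 here):

```c
/* Linear interpolation: t=0 gives a, t=1 gives b. */
static float li(float a, float b, float t)
{
    return a + (b - a) * t;
}

/* Bilinear filter: lerp the top pair and bottom pair of corner heights
   on x, then lerp those two results on y. */
static float bilinear(float h1, float h2, float h3, float h4,
                      float rayu, float rayv)
{
    float top    = li(h1, h2, rayu);   /* along x at row y   */
    float bottom = li(h3, h4, rayu);   /* along x at row y+1 */
    return li(top, bottom, rayv);      /* blend the two rows on y */
}
```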
The reason for not using square root for distance is that it is too slow. We know the angle and we know the change in x and y.
if (angle!=90 && angle!=270) {
    distance=raydiffx/cos(angle);
} else distance=raydiffy/sin(angle);

if (angle!=0 && angle!=180) {
    distance=raydiffy/sin(angle);
} else distance=raydiffx/cos(angle);
for (int i=0; i<ScreenWidth; i++)
    sin_correction[i] = sin((90 - HFOV/2 + i*(HFOV/(float)ScreenWidth)) * PI/180);
Note that the pre-computing of sin_correction would be done during init phase. Also note that we know when cos(angle) and sin(angle) will be 0 so no need to actually check using sin() and cos().
At 0 -> sin(0)=0
At 180 -> sin(180)=0
At 90 -> cos(90)=0
At 270 -> cos(270)=0
Since we can't divide by zero we must check for these angles.
You should use raydiffx if it is larger than raydiffy and vice versa for more accurate computations.
Also all angles can be precomputed as well. This makes each angle calculation a simple lookup into a table. Also distance can be pre-computed as well but I won't go into it here.
Basically if you optimize everything you can get some very good results.
Perhaps we should stick this on another board so that others can learn about raycasting ?
Also, if you have ever played an FPS, how do you think they track the bullets? Raycasting. Some physics engines use rays to track collisions, etc. It is very useful to know.
'Smoothed' means using the soften or blur filter in a paint program. It doesn't matter how many times because the perspective projection and the magnification factor will bring out the slight changes in height. But what it does allow us to do is use a smaller cellsize like 2 or 4 and yet still get good images even when we are not bilinear interpolating the heights. If you don't smooth the map, the non-bilinear filtered heights look rather stair-stepped and jagged. If you use too large of a cellsize to correct this, then you cannot cast nearly as far or nearly as fast.
It is always a trade-off between viewing distance and visual quality, just as in polygon graphics.
I'm not against polygon graphics and I've posted a terrain proggie here that uses polies. However, I'm extremely impressed at how natural voxel terrains look both up close and far away. Nearly every polygon terrain I've seen or rendered looks more jagged and unnatural the closer you get to the camera. One huge problem with blending 3D polies with voxels is that the polies stick out really badly. This is because they are being used against a very soft looking background and yet they themselves are extremely jagged in nature. If you have ever played Delta Force 1 or 2, you can literally see the dudes from miles away with no scope. They stick out like sore thumbs.
The solution to this is to enable anti-aliasing on the polies, specifically the outer edges. This would smooth them right into the terrain and would look very good. Framerate would not be affected too much because most of the screen is filled with voxels, which use their own graphics pipeline. If you can get good results from your games that use all polies and FSAA, then this idea would literally scream on your systems.
Perhaps its not a matter of polies or voxels...but both. Someday.....maybe.