>about the question I've asked twice so far
I have already answered your question on culling. Look up above my post about profiling. It is all there.
no no no no no! I'm not talking about backface culling! I'm talking about frustum culling!! You don't do frustum culling!!
EDIT: you might believe your scene is simple, but you are actually sending more polygons to the rendering pipeline than Quake 3 does, because you are trying to draw everything off to the side and behind you that you cannot ever possibly see.
Last edited by Silvercord; 09-24-2003 at 02:55 PM.
To put it simply: are you drawing every single polygon, whether it is on the screen or off the screen?
If you are, then that's the problem. I just assumed that you made checks to only draw the polys that could be seen.
Well, but that's just it: he's generating this terrain. It isn't generated by a compiler program such as Q3Radiant or Worldcraft or any modelling program that can partition (even Lightwave can partition into a BSP).
As long as the data for the terrain is stored in some sort of relational order, you could write an algorithm that would only draw what could be seen.
*slaps forehead*
good luck
I still promote just using a partitioning program, then the only check you have to do is a potential visibility set and or a frustum check
kain, no offense, but that's pretty impractical; I'd like to see someone actually do it (i.e., have you done it?)
>I'm talking about frustum culling!!
I don't have a clue what frustum culling is...
>you might believe your scene is simple but you are actually sending more polygons to the rendering pipeline than quake3 does
I highly doubt that... my entire scene is only 3600 polygons. A GeForce 2 is able to render 25 million polygons per second...
>I still promote just using a partitioning program
bah... I'm using all my own code...
I don't know what to think right now. Can you go online and send me the code? I will set you up with my FTP; my AIM is OpenGLProgrammer. It looks like the MSVC profiler only profiles entire functions (but I still cannot use the profiler anyway). What I want to do is profile each part of the DrawHexTextured function and see exactly which part is taking so long; then I'll be able to solve the problem. I'm done guessing at what 'could' be wrong.
Yes I have. It's just some basic trig to calculate whether the poly can be seen. In my terrain engine, each polygon of the terrain is represented by an instance of a class, and there is an array of linked lists, one for each texture.
To draw the terrain, I go through and dump pointers to all the polys that can be seen from the camera's current position into the linked lists, according to which texture they have. Then the program runs through all the linked lists, binding a texture once per list; it draws a poly from each pointer and at the same time removes that pointer from the list.
Technically they are linked lists, but you could get the same functionality out of a stack.
And before you say doing all that is too slow, remember that you're mostly just throwing around pointers, and that is very inexpensive. The technique is called a bucket sort, if you're interested.
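The bucketing pass described above can be sketched without any GL calls. This is a minimal version under my own names (Poly and BucketByTexture are illustrative, not from the actual engine), using vectors of pointers in place of the linked lists, since either structure gives the same behavior:

```cpp
#include <vector>

// Hypothetical minimal polygon record -- the real engine stores a class
// per terrain polygon; "texture" is an index into the texture array.
struct Poly {
    int texture;
    bool visible;   // result of the per-poly visibility test
};

// Dump pointers to the visible polys into one bucket per texture so each
// texture only has to be bound once per frame. Only pointers move,
// which is why the pass is cheap.
std::vector<std::vector<const Poly*>>
BucketByTexture(const std::vector<Poly>& polys, int numTextures)
{
    std::vector<std::vector<const Poly*>> buckets(numTextures);
    for (const Poly& p : polys)
        if (p.visible)
            buckets[p.texture].push_back(&p);
    return buckets;
}
```

The draw loop would then walk each bucket, bind that bucket's texture once, and emit every poly in it.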
Trig? I hope you don't mean calling trig functions... because those aren't exactly fast. And when you come right down to it, no matter how much you argue with me, there isn't anything more robust than precompiled potential visibility sets and/or frustum culling, period. Frustum tests are just dot products, and you'll find that whenever you can narrow a problem down to just dot products, it will be much faster than making nasty calls to sqrt and the trig functions. Obviously there's some amount of sqrting and trigging that must be done, but if you are calling trig functions to determine in real time whether each polygon is visible, that seems like a waste.
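For reference, a point-versus-frustum test really is just six dot products. A minimal sketch, where the plane representation and names are my own assumptions (normals pointing inward), not any particular engine's:

```cpp
// A plane in the form nx*x + ny*y + nz*z + d = 0, with the normal
// pointing into the frustum; a view frustum is six of these.
struct Plane { float nx, ny, nz, d; };

// A point is inside the frustum if it lies on the positive side of all
// six planes. Each check is one dot product and an add -- no trig, no sqrt.
bool PointInFrustum(const Plane planes[6], float x, float y, float z)
{
    for (int i = 0; i < 6; ++i) {
        const Plane& p = planes[i];
        if (p.nx * x + p.ny * y + p.nz * z + p.d < 0.0f)
            return false;   // behind this plane: cull
    }
    return true;
}
```

To cull whole bounding spheres rather than points, compare the distance against -radius instead of 0, so a sphere straddling a plane is still kept; that lets one test reject a whole group of polygons at once.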
Last edited by Silvercord; 09-25-2003 at 10:05 AM.
>>and when you come right down to it, no matter how much you argue with me, there isn't anything more robust than precompiled potential visibility sets and/or frustum culling<<
correction: yet. Personally, I bet that there IS a better way to do it.
Away.
Well, I had a long involved reply, but I screwed it up, so I'm gonna try a shorter one; some of this has already been covered, I know!
First, it seems to me that something this simple would run better than 30 fps in software on an Athlon 1800. Are you sure you're getting HW acceleration?
Second, avoid calls to glBind and glEnable/glDisable, and you don't need to change the matrix mode to modelview unless it has been changed to something else.
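One cheap way to act on the "avoid redundant glBind/glEnable calls" advice is to cache the last bound texture and skip the call when nothing changed. A sketch of that idea, with DriverBindTexture standing in for glBindTexture so no GL context is needed (all names here are mine, not from the engine):

```cpp
// Stand-in for glBindTexture so the caching logic can be shown without a
// GL context; in the real program this would be the actual GL call.
static int g_bindCalls = 0;
static void DriverBindTexture(unsigned int tex) { (void)tex; ++g_bindCalls; }

// Cache the last bound texture and skip redundant binds. The same trick
// works for glEnable/glDisable state and the matrix mode.
void BindTextureCached(unsigned int tex)
{
    static unsigned int lastBound = 0xFFFFFFFFu;  // sentinel: nothing bound yet
    if (tex != lastBound) {
        DriverBindTexture(tex);
        lastBound = tex;
    }
}
```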
Get backface culling working!!!! Make sure all your tris are drawn either CW or CCW; they should all be drawn one way or the other. Try setting the front face to CW and see if the other polys show.
Try using a 32 bpp bitmap instead and see if that helps.
Lastly, you're only using triangles, right!??!
So don't call glBegin in every call to DrawHexTextured()! Only call it when you need to, like when you change textures.
remove the glBegin and glEnd from DrawHexTextured().
Code:
void HEXMAP_DTP::DrawMap ( void )
{
    int x;

    glBindTexture ( GL_TEXTURE_2D, texture [WATER] );
    glBegin ( GL_TRIANGLES );   // glBegin needs a primitive mode; GL_TRIANGLES assumed since the hexes are drawn as tris
    for ( x = 0; x < WaterHex.length(); x++ )
    {
        if ( textureMapEnabled )
            WaterHex[x]->DrawHexTextured ( );
        else
            WaterHex[x]->DrawHex ( );
    }
    glEnd ( );

    glBindTexture ( GL_TEXTURE_2D, texture [GRASS] );
    glBegin ( GL_TRIANGLES );
    // etc...
}
my bet is getting the culling working will be the biggest help!
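To verify the winding advice above without eyeballing it: the sign of the 2D cross product of two edges of a projected triangle gives its winding, which is the same test GL applies after projection to decide front versus back face. A small sketch (function names are mine, not from the engine; positive = CCW in the usual y-up convention):

```cpp
// 2D cross product of edges AB and AC of a screen-space triangle.
// Its sign gives the winding: positive means counter-clockwise (CCW).
float SignedArea2D(float ax, float ay, float bx, float by,
                   float cx, float cy)
{
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

bool IsCCW(float ax, float ay, float bx, float by, float cx, float cy)
{
    return SignedArea2D(ax, ay, bx, by, cx, cy) > 0.0f;
}
```

If some hexes pass this test and others fail, that mixed winding is exactly why backface culling would make polys disappear inconsistently.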
also, what is your texture setup code??
ADVISORY: This users posts are rated CP-MA, for Mature Audiences only.
no-one: thanks for the input. It did not GREATLY increase my frame rate; however, your suggestion did increase both textured and non-textured mode by about 2 frames per second.
>also what is you texture setup code??
here is my texture setup code:
Code:
enum TEXTURE_TYPE
{
    WATER,
    GRASS,
    HILL,
    TEXT,
    TEXT_BUMP,
    NUM_TEXTURES
};

GLuint texture [NUM_TEXTURES];                  // Storage for textures
apvector <apstring> TextureName;

void InitTextureNames ( void )
{
    TextureName.resize ( NUM_TEXTURES );
    TextureName[0] = "water.bmp";
    TextureName[1] = "grass.bmp";
    TextureName[2] = "hill.bmp";
    TextureName[3] = "Font.bmp";
    TextureName[4] = "Bumps.bmp";
}

AUX_RGBImageRec *LoadBMP ( char *Filename )     // Loads A Bitmap Image
{
    FILE *File=NULL;                            // File Handle

    if (!Filename)                              // Make Sure A Filename Was Given
    {
        return NULL;                            // If Not Return NULL
    }

    File=fopen(Filename,"r");                   // Check To See If The File Exists
    if (File)                                   // Does The File Exist?
    {
        fclose(File);                           // Close The Handle
        return auxDIBImageLoad(Filename);       // Load The Bitmap And Return A Pointer
    }
    return NULL;                                // If Load Failed Return NULL
}

int LoadGLTextures()                            // Load Bitmaps And Convert To Textures
{
    int Status=FALSE;                           // Status Indicator
    AUX_RGBImageRec *TextureImage[NUM_TEXTURES];        // Create Storage Space For The Textures
    memset(TextureImage,0,sizeof(void *)*NUM_TEXTURES); // Set The Pointers To NULL

    // Load Each Bitmap, Check For Errors, If A Bitmap's Not Found Quit
    for ( int x = 0; x < NUM_TEXTURES; x++ )
    {
        if ( TextureImage[x] = LoadBMP( (char *)TextureName[x].c_str() ) )
        {
            Status=TRUE;                        // Set The Status To TRUE
            glGenTextures(1, &texture[x]);      // Create The Texture

            // Typical Texture Generation Using Data From The Bitmap
            glBindTexture(GL_TEXTURE_2D, texture[x]);
            glTexImage2D(GL_TEXTURE_2D, 0, 3, TextureImage[x]->sizeX, TextureImage[x]->sizeY,
                         0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[x]->data);
            glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
        }

        if (TextureImage[x])                    // If Texture Exists
        {
            if (TextureImage[x]->data)          // If Texture Image Exists
            {
                free(TextureImage[x]->data);    // Free The Texture Image Memory
            }
            free(TextureImage[x]);              // Free The Image Structure
        }
    }
    return Status;                              // Return The Status
}
Well, the only thing I can figure is that you're doing something insane each frame, or you're running totally or partially in software...
Just checking on performance: I had about 4k+ polys, 2 textures at 512x512x32, a single light, and totally unoptimized preliminary draw code (i.e., it calls glBind every frame, no backface culling), and it ran around 135 fps on a P3 750 with a Radeon 7200 (ancient)!!
So, turn off all HW acceleration on the graphics card and see how fast it runs.
In case you don't know, it can be done like so:
right-click on your desktop -> Properties -> Settings tab -> Advanced -> Troubleshoot tab -> move the slider all the way to the left to turn off all HW acceleration.
Just remember to reverse this setting when you're done.
One new development:
I gave my program to a friend of mine to run on his computer. I do not know all the stats of his computer, but I do know this:
It is a laptop with integrated video
His laptop got 120 fps running my engine. (textured)
Here are some other statistics:
My engine running on another friend's computer: 60 fps (textured)
His comp stats: 1.1 Ghz Athlon, 512 MB ATI Rage Pro II+, 128 MB Ram
My engine running on another friend's computer: between 200 and 500 fps (yes those are correct numbers)
His comp stats: 1.8 GHz AMD, 512 MB of PC3200, Radeon 9800, nForce2 chipset
Another friend: 60 fps
His stats: 2400+ AMD Athlon, other stats unknown
Last edited by DavidP; 09-30-2003 at 05:33 PM.