Thread: Frame Rate Not Constant

  1. #1
    Registered User
    Join Date
    Jan 2012
    Posts
    17

    Frame Rate Not Constant

    This thread sort of belongs in game programming, but it's mostly a Win32 problem so I put it here.

    I don't want to add my code unless I have to because it's a lot to copy and paste; 100+ lines of code.

    Okay, so I'm making a 2D game. This is how it works:
    - Bitmaps are BitBlt-ed to the window using its device context.
    - My game loop uses PeekMessage for the windows message pump.
    - I use the timeGetTime() function to throttle the frame rate to 60 FPS.

    The problem is this:
    For some strange reason, every so often when I start the game the frame rate gets REALLY slow, and I can't understand why. No other processes are running when I open the game. I know games can run on the Windows GDI without any problem, but for some reason my game gets super slow sometimes.

    I've tried running it on other computers and the same thing happens. Every so often it will just run super slow for no reason at all. Am I causing some kind of memory leak or something? Sooo lost. I should have stuck with Console applications... :-(

  2. #2
    Novice
    Join Date
    Jul 2009
    Posts
    568
    Show us how you implemented the framerate throttling. It's a non-trivial piece of code with a lot of impact. If you feel that there's too much code, replace non-essential bits with comments on what's going on but show it to us.
    Disclaimer: This post shows my ignorance at the time of its making. I claim ownership of but not responsibility for all errors in it. Reference at your own peril.

  3. #3
    Registered User
    Join Date
    Jan 2012
    Posts
    17
    Thanks a lot for taking interest in my question. Win32 is not easy to learn these days unless you can afford to buy a reference book. I can't find much on the internet at all.

    All my code is plain C because I can't stand OOP.

    This is my game loop:
    START OF LOOP
    Code:
    curTime = timeGetTime();
    frameCount++;
    
    //frame counting stuff
    if(curTime - lastTally > 1000){
    	lastTally = curTime;
    	sprintf(theTitle, "%i fps", frameCount);
    	SetWindowText(theWind, theTitle);
    	frameCount = 0;
    }
    
    //doSomething does all the game math.
    //It's repeated 3 times so that the sprites move several
    //times before the redraw
    for(a = 0; a < 3; a++) doSomething();
    //drawAll renders the sprites on the window.
    drawAll();
    //busy-wait until 1/60th of a second (about 16.7 ms) has passed
    while(timeGetTime() - curTime < 1000.0f / 60.0f);
    END OF LOOP

    It's very possible that the problem is the way I handle the device contexts. The code is pretty self-explanatory. The rendering code is below:

    Code:
    void drawAll(){
    	HDC hdc = GetDC(theWind);
    
    	HDC hdcBuffer = CreateCompatibleDC(hdc);
    	HBITMAP hbmBuffer = CreateCompatibleBitmap(hdc, 640, 480);
    	HBITMAP hbmOldBuffer = (HBITMAP)SelectObject(hdcBuffer, hbmBuffer);
    
    	HDC hdcMem = CreateCompatibleDC(hdc);
    	HBITMAP hbmOld = (HBITMAP)SelectObject(hdcMem, theMap);
    	if(showBG) BitBlt(hdcBuffer, 0, 0, 640, 480, hdcMem, 0, 0, SRCCOPY);
    
    	//draw all sprites in the array
    	int a; for(a = 0; a < 20; a++){
    		if(act[a].alive == 0) continue;
    		SelectObject(hdcMem, theMask);
    		BitBlt(hdcBuffer, act[a].left, act[a].top, act[a].selW, act[a].selH, hdcMem, act[a].selX, act[a].selY, SRCAND);
    		SelectObject(hdcMem, theImg);
    		BitBlt(hdcBuffer, act[a].left, act[a].top, act[a].selW, act[a].selH, hdcMem, act[a].selX, act[a].selY, SRCPAINT);
    	}
    
    	SelectObject(hdcMem, hbmOld);
    	DeleteDC(hdcMem);
    
    	BitBlt(hdc, 0, 0, 640, 480, hdcBuffer, 0, 0, SRCCOPY);
    	ReleaseDC(theWind, hdc);
    
    	SelectObject(hdcBuffer, hbmOldBuffer);
    	DeleteObject(hbmBuffer);
    	DeleteDC(hdcBuffer);
    }
    There isn't anything happening other than the game loop and the rendering. Keyboard input is done through the Windows message pump. The code is compiled with MinGW.

    Any help is appreciated.

  4. #4
    Registered User
    Join Date
    Jan 2012
    Posts
    17
    By the by, the game loop is nested in this kind of message pump:

    Code:
    PeekMessage(&msg, NULL, 0, 0, PM_NOREMOVE);
    
    while (msg.message != WM_QUIT) {
    	if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
    		TranslateMessage(&msg);
    		DispatchMessage(&msg);
    	} else {
    		//above game loop code goes here
    	}
    }

  5. #5
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    My first suggestion is:

    Code:
    while (gameActive)
    {
       while(PeekMessage(&msg,0,0,0,PM_REMOVE))
       {
           TranslateMessage(&msg);
           DispatchMessage(&msg);
       }
    
       //Update and render here
    }
    Your code checks for one message and, if it finds one, removes it, translates it, and dispatches it. However, the else stops the game code from executing every time there is a message. Since you are using if() instead of while(), the message queue is not being emptied each iteration. If there are a lot of messages (which there probably will be at startup) it can take some time to work through them. Also, since you either process a message OR update the game, instead of processing all pending messages AND updating the game, frames will be skipped.

    Second suggestion is to use QueryPerformanceCounter() instead of timeGetTime(). Also, timeGetTime()'s default period on WinXP (and I assume all OSes after it) can be as coarse as 5 ms. To alter this you can call timeBeginPeriod() and timeEndPeriod() and get it down to 1 ms (supposedly). However, I seriously doubt that timeGetTime() actually gets down to a precision of 1 ms given what it does, and I believe in another thread one of the members proved it in fact does not have 1 ms resolution even when set to it.
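
    If you do try the timer-period route, the pairing itself is simple. A minimal sketch, assuming you link against winmm (-lwinmm with MinGW) and that runGame is whatever your top-level function is called:

    Code:
    #include <windows.h>
    #include <mmsystem.h>   //link with winmm
    
    void runGame(void)
    {
    	timeBeginPeriod(1);   //request 1 ms timer resolution for this process
    
    	// ... message pump and game loop using timeGetTime() ...
    
    	timeEndPeriod(1);     //always pair with the matching timeBeginPeriod()
    }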

    Third suggestion, on limiting frame rate, is to simply clamp the timeDelta you calculate to the desired frame time if it's higher than that maximum. Since 1/60 of a second is about 16.667 ms, if the incoming timeDelta is higher than this you simply clamp it to 16.667 ms and all is well. For 30 FPS you would clamp to 1/30 of a second (33.333 ms), and so on.

    Do not stop your system from updating and/or rendering to limit the frame rate. Rather, let the system continue to render but clamp the timeDelta that Update() uses to update the game. If Update() only ever steps objects by at most 16.667 ms then, regardless of how many times you render, your game will still appear to run at 60 FPS, b/c even if it tried to render at 120 FPS, half of the renders are rendering the exact same frame.
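
    In code the clamp is just a couple of lines. A sketch (timeDelta is in seconds here; Update() and Render() stand in for whatever your loop actually calls):

    Code:
    const float maxDelta = 1.0f / 60.0f;   //largest step Update() will ever see
    
    if (timeDelta > maxDelta)
        timeDelta = maxDelta;              //clamp instead of stalling the loop
    
    Update(timeDelta);                     //game logic advances by at most 1/60 s
    Render();                              //render as often as the machine allows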

    The final problem is that you are in Windows and using GDI. The frame rate is going to be wonky b/c you are not in fullscreen. I do not pretend to know what happens when an app goes full screen under the hood (as in the case of D3D or OGL) but the message loop acts very differently when in full screen although it appears on the surface to behave the same. It probably has something to do with application priority and the driver elevating that priority somehow so the game can run with minimal interference from the OS.

    Code for QueryPerformanceCounter:

    Code:
    LARGE_INTEGER perfFreq;
    LARGE_INTEGER currentTime;
    LARGE_INTEGER prevTime;
    
    // Query the performance frequency
    QueryPerformanceFrequency(&perfFreq);
    
    // Compute seconds-per-count; use double so precision isn't lost
    double ratio = 1.0 / (double)perfFreq.QuadPart;
    
    // Start the timing at previous time for the first frame of the game
    QueryPerformanceCounter(&prevTime);
    while (active)
    {
       QueryPerformanceCounter(&currentTime);
       float timeDelta = (float)((currentTime.QuadPart - prevTime.QuadPart) * ratio);
    
       //Process messages here (snipped)
       ...
       ...
    
       // Main game loop
       Game.Update(timeDelta);
       Game.Render();
       
       prevTime = currentTime;
    }
    Last edited by VirtualAce; 01-13-2012 at 04:01 PM.

  6. #6
    Registered User
    Join Date
    Jan 2012
    Posts
    17
    That was very helpful! I made the necessary changes and I'm still having the speed problem.

    I decided to try commenting out the throttling altogether to see what happened. It's really strange: every time I run the program it maxes out at a different frame rate. For example, sometimes it tops out at 200 fps, other times 350 fps, and then sometimes it just dies and drops to 30 fps. Like I said, nothing is running in the background either; I keep a clean machine. All the other processes in the task manager are at 0%. I just don't get it at all.

    Sorry, I'm very partial to the windows GDI. If nobody can help me I'll understand. It's really weird.

    By the way, I tried this on other computers, it makes no difference.

  7. #7
    Registered User
    Join Date
    Jan 2012
    Posts
    17
    Just for the record, I wasn't trying to be sarcastic in that first sentence. I just realized it kinda comes off that way.

  8. #8
    Novice
    Join Date
    Jul 2009
    Posts
    568
    Try adding `CS_OWNDC` to your window class styles. Maybe that could help.
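
    For reference, that's one flag in the WNDCLASS style field when you register the class. A sketch (the class name, WndProc, and hInstance are whatever you already use):

    Code:
    WNDCLASS wc = {0};
    wc.style         = CS_HREDRAW | CS_VREDRAW | CS_OWNDC; //CS_OWNDC: window keeps one private DC
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInstance;
    wc.lpszClassName = "GameWindow";
    RegisterClass(&wc);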
    Disclaimer: This post shows my ignorance at the time of its making. I claim ownership of but not responsibility for all errors in it. Reference at your own peril.

  9. #9
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    Are you calling drawAll every frame? If so you will want to pre-cache those DCs. In fact you probably do not want to be creating anything during the actual rendering if you can avoid it. Pre-cache everything you can and every type of object you will need so you are not doing any dynamic allocation in the main loop. Delegate all of that to your setup and use smart pointers and the like to handle pointer ownership issues.
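
    In plain C terms that might look something like this (a sketch only; the g_ names are made up and error checking is omitted):

    Code:
    //Created once at startup, reused every frame
    HDC     g_hdcBuffer;
    HBITMAP g_hbmBuffer, g_hbmOldBuffer;
    HDC     g_hdcMem;
    
    void initGraphics(HWND wnd){
    	HDC hdc = GetDC(wnd);
    	g_hdcBuffer    = CreateCompatibleDC(hdc);
    	g_hbmBuffer    = CreateCompatibleBitmap(hdc, 640, 480);
    	g_hbmOldBuffer = (HBITMAP)SelectObject(g_hdcBuffer, g_hbmBuffer);
    	g_hdcMem       = CreateCompatibleDC(hdc);
    	ReleaseDC(wnd, hdc);
    }
    
    void shutdownGraphics(void){
    	SelectObject(g_hdcBuffer, g_hbmOldBuffer);
    	DeleteObject(g_hbmBuffer);
    	DeleteDC(g_hdcBuffer);
    	DeleteDC(g_hdcMem);
    }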

    That being said, you still may have issues. After all, it is GDI, and it was never intended for high-performance graphics but rather feature robustness. I seriously suggest moving to a graphics API b/c it is so nice to work with a graphics system that actually works with you instead of against you.

  10. #10
    Registered User
    Join Date
    Jan 2012
    Posts
    17
    @ msh: I tried adding CS_OWNDC a while back, no difference unfortunately =(

    @ VirtualAce: Your suggestions helped a lot but I'm still not totally satisfied. At times the frame rate gets down to 60 fps. I don't feel comfortable unless I can work with at least 100 fps, just in case I want to blit a lot of sprites at some point in the game. I don't want the frame rate to dive when there are 10 extra sprites running around. This really sucks. I'm forced to agree with what you said about using a graphics API. The Win32 GDI is good for card games I guess... lol.

    Thanks for the help everyone.

  11. #11
    Registered User
    Join Date
    Jan 2012
    Posts
    17
    Just for the record, is it taboo to GetDC at the beginning of a program and not release it until the very end? Every tutorial gets and releases the DC on every redraw of the screen.

  12. #12
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    Wait, you said you wanted to clamp the frame rate at 60 and now you say it isn't clamping and you want 100? Decide what frame rate you want and clamp to 1.0f / desiredFrameRate. If you are clamping, the FPS can never go over that. Instantaneous frame rate can be computed as 1.0f / timeDelta, where timeDelta is expressed in seconds. Since you are clamping timeDelta to 1.0f / x, where x is the desired frame rate in frames per second, the step Update() sees can never exceed 1.0f / x.


    BTW there is a great article on this page about the pre-allocation strategy I have mentioned here and in other places. The problem with game programming is that it really sits outside the box of what other programs have to do and the scope of their requirements. As such, software engineers are usually forced to approach the entire program from a much different perspective than engineers on other types of projects. The problem is that a 'real-time' game (note that not all games fall into this category) must above all else maintain an interactive frame rate at all times. This is not as simple as it sounds. As I just mentioned, at 60 FPS your timeDelta is 16.6667 ms. If you multiply that by 2 you are already down to 30 FPS. If the driver is running in vsync mode the situation is much, much worse. In vsync the frame rate will ONLY sync to an integer divisor of the monitor's refresh rate. So if you are using a 60Hz display and are getting 59.9999999 FPS you are going to get 30 FPS. If you are getting 29.999999 you are going to get 20 FPS, and if you get 19.99999 you are going to get 15 FPS. Look at how drastically the frame rate drops off in vsync:

    60Hz vsync FPS
    • 60
    • 30
    • 20
    • 15
    • 12
    • 10
    • 6
    • 3
    • 2
    • 1

    Anything below 15 is no-man's land and you might as well face it that no one is going to play below 15 FPS without getting frustrated or a serious headache or both.

    I encourage you to read the section on that page about pre-allocation and how it can save you. Note that there are other articles on that page I do not necessarily agree with in full but overall it is pretty good information. Also while new and delete are just fine for any normal C++ application they are absolutely terrible for games. New and delete by game standards are slow, clunky, and cause massive memory fragmentation. So what is the answer? Use them once by creating your own stack-based allocators or pool-based allocators. By all means you should never and I mean never be allocating using new and delete (or even malloc and free in C) inside the main game loop of any real-time game.
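
    As a rough illustration of the pool idea in plain C (a sketch only: fixed-size slots, no thread safety, and the caller must pick a slot size that keeps its objects aligned):

    Code:
    #include <stdlib.h>
    
    typedef struct PoolNode { struct PoolNode *next; } PoolNode;
    
    typedef struct {
    	void     *block;  //one big allocation made during setup
    	PoolNode *free;   //free list threaded through the unused slots
    } Pool;
    
    int pool_init(Pool *p, size_t slotSize, size_t count){
    	size_t i;
    	if(slotSize < sizeof(PoolNode)) slotSize = sizeof(PoolNode);
    	p->block = malloc(slotSize * count);  //the only malloc, done up front
    	if(!p->block) return 0;
    	p->free = NULL;
    	for(i = 0; i < count; i++){
    		PoolNode *n = (PoolNode*)((char*)p->block + i * slotSize);
    		n->next = p->free;
    		p->free = n;
    	}
    	return 1;
    }
    
    void *pool_alloc(Pool *p){            //O(1), no system call; NULL when exhausted
    	PoolNode *n = p->free;
    	if(n) p->free = n->next;
    	return n;
    }
    
    void pool_free(Pool *p, void *ptr){   //O(1), no fragmentation
    	PoolNode *n = (PoolNode*)ptr;
    	n->next = p->free;
    	p->free = n;
    }
    
    void pool_destroy(Pool *p){ free(p->block); p->block = NULL; p->free = NULL; }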

    You may think this overkill for smaller games and it is. But it is a good habit to get into and once you build your memory libraries or purchase them from vendors it becomes simple. If you do not want to build your own memory libraries (and I can certainly understand why) then you should only be using new and delete in the setup of your application. Now in your code you are creating objects at the start of the drawing and destroying them at the end. In a normal draw-once and buffer the results type Win32 app this is fine. This is not fine in a game where you are calling this loop literally 60 times a second. So look at your code again:

    Code:
    HDC hdc = GetDC(theWind);      
    HDC hdcBuffer = CreateCompatibleDC(hdc);    
    HBITMAP hbmBuffer = CreateCompatibleBitmap(hdc, 640, 480);    
    //HBITMAP hbmOldBuffer = SelectObject(hdcBuffer, hbmBuffer);      
    HDC hdcMem = CreateCompatibleDC(hdc);
    ....
    You do all this before you draw one pixel. I commented out the SelectObject() to call attention to it: it must be done, which is a huge problem and part of why you shouldn't be using GDI in the first place. But you have to do it and there is no way around it. Moving on.

    Ok so you are grabbing a DC, creating a new DC from that DC, creating a new bitmap based on the DC, and then creating a memory DC for blitting. I do not know what those functions do under the hood but I know it cannot be cheap or fast. Now you are going to call this method 60 times per second, which means after 60 frames, i.e. in 1 second, you will have done the following:

    • 60 calls to GetDC()
    • hdcBuffer - 60 calls to CreateCompatibleDC - 60 creations and destructions of a new DC
    • hbmBuffer - 60 calls to CreateCompatibleBitmap - 60 creations and destructions of a new bitmap
    • hdcMem - 60 calls to CreateCompatibleDC - 60 creations and destructions of a new DC


    So in total you are going to call GetDC() 60 times per second, CreateCompatibleDC() 120 times per second and CreateCompatibleBitmap 60 times per second. See a problem here? Pre-allocate those objects and use SelectObject to select them in and out of the DC. Since the DCs are going to be needed for the lifetime of the object simply move their scope into a class or an object that has the same lifetime as the game and that the render loop can easily access (preferably without a Get() call to grab them). Once the game level is finished you can then cleanup the object holding the DCs which under RAII should clean up the DCs. The memory footprint for your game will be much larger but you have far more memory at this point than you do CPU cycles so it is a good tradeoff.
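
    With everything pre-created (along the lines of the init sketch earlier in the thread), the per-frame path shrinks to selects and blits only. Roughly, where g_hdcBuffer and g_hdcMem are the assumed cached objects:

    Code:
    void drawAll(void){
    	HDC hdc = GetDC(theWind);  //cheap compared to the Create* calls
    
    	SelectObject(g_hdcMem, theMap);
    	if(showBG) BitBlt(g_hdcBuffer, 0, 0, 640, 480, g_hdcMem, 0, 0, SRCCOPY);
    
    	// ...sprite mask/image blits as before, into g_hdcBuffer...
    
    	BitBlt(hdc, 0, 0, 640, 480, g_hdcBuffer, 0, 0, SRCCOPY);
    	ReleaseDC(theWind, hdc);
    }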

    Remember in games - pre-allocate everything you can and streamline the game's main loop. By the time you enter the game loop and by that I mean the Update() Render() portion all objects should be allocated and ready for use by various game systems. You should cringe if you are creating new objects regardless of what they are inside of Update() or Render() or any method that is called by those methods.

    Again this is for games. So don't go telling everyone here that VirtualAce said that new and delete were evil and you should not use them. It's all about context and application requirements.
    Last edited by VirtualAce; 01-15-2012 at 02:13 PM.

  13. #13
    Registered User
    Join Date
    Jan 2012
    Posts
    17
    I grasp everything you're saying. It makes perfect sense. I've been able to make major speed improvements by making the graphics buffer a global variable and creating it only once.

    To clarify, I want to limit the frame rate to 60 fps in the finished product, but I turn off the throttling in development to gauge how "heavy" the blitting is in different parts of the game. My worry is that if the frame rate is too low WITHOUT throttling then the speed is going to go below 60 fps when there are one too many sprites on a particular part of the level. It makes me think I'm seriously wasting my time.

    I appreciate all this help, I really do.

  14. #14
    train spotter
    Join Date
    Aug 2001
    Location
    near a computer
    Posts
    3,868
    I would also want to remove the SelectObject() calls from the loop in the render method.

    At start up I would create 2 more 'global' scope memDCs to hold 'theImg' and 'theMask' (and so not have to SelectObject() these BITMAPs before each BitBlt(); just BitBlt() from these 2 new DCs), as sketched below.

    This means you require more memory, but should increase the speed of your drawing.
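
    Something like this, as a sketch (error handling omitted; the g_ names are made up):

    Code:
    //Created once at startup: one memDC per bitmap, selected once and left alone
    HDC g_hdcImg, g_hdcMask;
    
    void initSpriteDCs(HDC hdc){
    	g_hdcImg  = CreateCompatibleDC(hdc);
    	g_hdcMask = CreateCompatibleDC(hdc);
    	SelectObject(g_hdcImg,  theImg);
    	SelectObject(g_hdcMask, theMask);
    }
    
    //Per sprite, with no SelectObject() left in the loop:
    BitBlt(hdcBuffer, act[a].left, act[a].top, act[a].selW, act[a].selH, g_hdcMask, act[a].selX, act[a].selY, SRCAND);
    BitBlt(hdcBuffer, act[a].left, act[a].top, act[a].selW, act[a].selH, g_hdcImg,  act[a].selX, act[a].selY, SRCPAINT);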

    Quote Originally Posted by OhPoo View Post
    Just for the record, is it taboo to GetDC at the beginning of a program and not release it until the very end? Every tutorial gets and releases the DC on every redraw of the screen.
    Yes.

    Because once you have created your memDC etc you no longer need the HDC from GetDC (and can release it).

    Like dynamically allocated memory, you should only 'hold' GDI objects for the minimum time required (ie the smallest scope possible).
    "Man alone suffers so excruciatingly in the world that he was compelled to invent laughter."
    Friedrich Nietzsche

    "I spent a lot of my money on booze, birds and fast cars......the rest I squandered."
    George Best

    "If you are going through hell....keep going."
    Winston Churchill

  15. #15
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    It is bad practice normally to call GetDC() and release it much later, but it all comes down to what you need. I would say if you removed all of the other heavyweight code you could still get away with the GetDC(). See what GDI graphics are causing you to do? You have barely anything on the screen and you already have to optimize. There is so much more power available on that graphics card in your system just waiting to be utilized.
