You should be looking at the timeKillEvent, TimeProc, and timeSetEvent functions.
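For reference, here is a minimal sketch of how those three functions fit together. This is my illustration, not code from this thread: the tick counter and the 15 ms period are invented, winmm.lib must be linked, and it assumes a Windows toolchain.

```c
#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib") /* multimedia timer API lives in winmm */

/* TimeProc: called by the system, on its own thread, once per period. */
static void CALLBACK TimeProc(UINT uTimerID, UINT uMsg,
                              DWORD_PTR dwUser, DWORD_PTR dw1, DWORD_PTR dw2)
{
    LONG *ticks = (LONG *)dwUser;
    InterlockedIncrement(ticks); /* callback runs on another thread */
}

int main(void)
{
    LONG ticks = 0;
    timeBeginPeriod(1); /* request 1 ms scheduler resolution */

    /* Fire roughly every 15 ms until killed. */
    MMRESULT id = timeSetEvent(15, 1, TimeProc,
                               (DWORD_PTR)&ticks, TIME_PERIODIC);
    if (id == 0) return 1;

    Sleep(1000);       /* let it run about a second */
    timeKillEvent(id); /* stop the periodic timer */
    timeEndPeriod(1);

    printf("ticks: %ld\n", ticks); /* roughly 1000/15, not exact */
    return 0;
}
```

Note the callback fires on a worker thread, which is why the counter is incremented atomically.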
All problems in computer science can be solved by another level of indirection,
except for the problem of too many layers of indirection.
– David J. Wheeler
Is timeSetEvent actually accurate to the clock? I was just wondering if it will call back exactly every 15 ms or so if you specify that. I can't imagine it will... It will be an approximation, as close as it can be.
The docs are very vague. Not to mention: if the first callback occurs at 17 ms, will the second happen at 30 ms or at 32 ms?
MM timers call the callback on a different thread, so it is as accurate as the thread-switching mechanism allows. You cannot get anything more accurate on Windows.
Yes, I understood as much, but the question is whether it will try to synchronize to compensate for this. If the first call takes 17 ms, will the second occur at 30 ms or at 32 ms (assuming it can call back accurately at the specified period)?
That's the question that interests me.
Yes, it will...
But I prefer to make one-shot settings for the next event, based on the time spent processing the previous event, because in my code MM events do not occur at equal intervals...
Please tell me that neither MoveObjects, nor ProcessInput, nor anything else in that loop in AdvanceClock does any drawing, or other operation that might take so much as a millisecond?
If any of those do, then that would be your problem. You need to decouple your fixed-rate logic updates from your frame rate. If your logic update rate is 60 fps but your PC can't manage to draw as fast as 60 fps (possibly due to other programs using CPU), then you will get the problem you mention.
timeGetTime should really only be called ONCE at the start of that function, and everything else should simply be code that decides on how many logic steps to execute, or how long to sleep. The fact that you're calling it in a loop suggests you expect some things in that loop to take a little while, which as per the above, would be bad.
Make (1000.0/60.0) a named constant instead of hard coding 16.666...
I think another problem comes from the fact that you're assuming AdvanceClock is called exactly 60 times per second (due to the Actual += 16.66666666666667; line near the start). However, you then assume it isn't being called 60 times per second, shown by the fact that you go on to compensate for having slipped behind by adding Actual += 16.66666666666667; in the loop later on.
You can't do both! Either you rely on AdvanceClock being called exactly 60 times per second, in which case this code won't need to contain loops, or you allow it to be variable, in which case you need to remove the first occurrence of Actual += 16.66666666666667;
So, which is it?
Last edited by iMalc; 02-23-2008 at 02:45 PM.
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
You're apparently misunderstanding a bit. I ran an experiment a while ago to test the timer. If you haven't already, download The Interactive Animation V1.1 and run it. It uses the exact same timer system I have posted. The motion is very smooth when CPU usage is low. My experiments have told me that the difference in times between each timeGetTime() call is variable, so I had to account for this (the reason "Actual" is involved). The frame rate is all over the place otherwise, which doesn't provide smooth motion. If you were to check the differences in the times, you'd see they're never a constant 16.67; rather, they varied from 15 to 20. This is what happens, step by step:
1. After the previous frame was processed, I increase actual by 1/60 second.
2. I call the timeGetTime() function to get the current clock.
3. I find what the offset of this is from actual.
4. If the time returned is below actual, I make it wait a bit longer and I repeat.
5. When the offset is at least what actual is, I then proceed to processing the rest of what is needed for the frame - adjusting the positions, processing input, then drawing at the end.
If there was a half-second lapse in rendering frames, the clock is advanced accordingly, with the positions and input processed as needed. Yes, this could use some optimizations, and that's an easy fix. If you want to see for yourself, this is the same piece of code I used in my experiment, using the exact same principle as my current timer.
Running this, I see something like this screenshot. If the timer were constant, which it's not, you should see two 16's and one 17. That's not what I see. I see some occasional 18's and 15's as well. Without switching to Windows Task Manager, I see an occasional 20 pop up with a 12 or so following it. If I do switch to Windows Task Manager, the offset gets even more extreme, with numbers as big as 44 causing lost frames (a jerk in my program). When my Norton was running its automatic updates, I ran the test during that, and instead of the typical 50 lost frames it went to over 100, with lots of 30's, 40's, and an occasional 50 thrown in causing 2 lost frames at once (a big jerk). It's all over the place. Run it and you'll see the same thing. Run "The Interactive Animation" and get to around 50 mph horizontal speed (use the left and right arrow keys). If you don't have any background tasks, the motion is very smooth and there aren't any lost frames. But when you switch to Windows Task Manager or have some CPU-intensive task running, you'll see that the motion becomes jerky.
If you were to run the timer test with the same CPU-intensive task running in the background, you'll see the same effects - lots of lost frames.
Code:
void HighPrecClockTest()
{
    double CurrentTime, StartTime, PrevTime, actual, offset;
    char processed = 0;
    double Frequency = 60.0;
    unsigned int cycles = 1800;
    unsigned int process_count = 0;
    unsigned int LostFrames = 0;
    unsigned int TotalLostFrames = 0;

    timeBeginPeriod(2); // 2 milliseconds is the starting point
    CurrentTime = timeGetTime();
    StartTime = CurrentTime;
    PrevTime = CurrentTime;
    actual = CurrentTime;
    offset = 0;

    while (process_count < cycles)
    {
        if (processed == 0)
        {
            LostFrames = 0; // reset
            actual += (1/Frequency*1000.0); // increment for the given frequency
        }
        offset = CurrentTime - actual;
        if (offset < 0.0) // if not long enough of a delay
        {
            Sleep(1); // wait a little longer
            CurrentTime = timeGetTime(); // reobtain the time
            offset = CurrentTime - actual; // and the offset
            processed = 1; // prevent adding the actual value
            continue; // restart the loop
        }
        while (offset > (1/Frequency*1000.0)) // excessively long delay (as from sudden stalls due to sudden CPU usage)
        {
            LostFrames++;
            TotalLostFrames++;
            actual += (1/Frequency*1000.0); // boost the actual time to keep in sync
            CurrentTime = timeGetTime(); // reobtain the clock
            offset = CurrentTime - actual; // and the offset
        }
        Sleep(1);
        printf("Took %5.4f milliseconds to process this cycle.\nCurrentTime %4.4f - StartTime %4.4f\nactual %4.4f - offset %3.5f - Lost Frames: %d\n\n",
               CurrentTime-PrevTime, CurrentTime, StartTime, actual, offset, LostFrames);
        PrevTime = CurrentTime;
        CurrentTime = timeGetTime();
        processed = 0;
        process_count++;
    }
    timeEndPeriod(2);
    printf("It took %5.4f milliseconds to process %d cycles, %d of these lost.\nThat's a %3.5f millisecond accuracy!\n\n",
           CurrentTime-StartTime, cycles, TotalLostFrames,
           (CurrentTime-StartTime)/((double)cycles+(double)TotalLostFrames));
}
Last edited by ulillillia; 02-23-2008 at 05:41 PM. Reason: Fixed width
High elevation is the best elevation. The higher, the better the view!
My computer: XP Pro SP3, 3.4 GHz i7-2600K CPU (OC'd to 4 GHz), 4 GB DDR3 RAM, X-Fi Platinum sound, GeForce 460, 1920x1440 resolution, 1250 GB HDD space, Visual C++ 2008 Express
I've enhanced the timer function some to get rid of those two function calls, and whether or not I use an abnormally high quality for my ground (causing a lot of drawing and 15 fps), even with the two function calls, it could still keep up quite well. The optimizations simply multiply the 60 fps distance traveled by the number of frames to process. If it lags behind, this amount increases. If there's a dropped frame, instead of covering the distance of one 60 fps step, it's doubled (then tripled if many are dropped at once). I have the timer increment by this exact amount for one reason: monitor synchronization. If I just used whatever, I'd get a ripping effect, which looks even worse than the jerking....
There is a problem that is causing jerky inconsistent frame rates when CPU usage of other programs increases. The severity of the problem is related to the amount of CPU usage of those other programs. Right?
You're asking for an imaginary solution ("raising priority of a function") to fix the problem. This shows that you do not understand how to properly fix the problem.
Does this "it takes 20 seconds for 1 second to show in the timer", mean that your program logic isn't being updated at the normal rate when CPU usage increases? If not, what did you mean by that? You're not explaining yourself very well.
You certainly can't expect the frame rate to stay at 60 fps if the program doesn't get enough CPU time, but since updating the game logic takes very little time, you can maintain that at least in real time. Your code looks like it is attempting to do that, but I'm pretty sure there is a logic bug. That bug is not present in HighPrecClockTest.
In any case, once you can guarantee that your game logic is updated the right number of times in between frames, so as to have a fixed update rate, you can simply render as few or as many frames as CPU usage allows and it should appear smooth (assuming the frame rate doesn't drop too low).
I have a software 3D engine that uses fixed-step logic at 60fps, and variable fps. It runs smoothly whether it is a background app or a foreground app, and although fps drops when it can't get enough CPU time, it remains smooth. In fact I can run two copies side-by-side and both stay smooth.
So it is do-able, and I know how to do it.
You've not answered any of my questions. Instead you've gone and made an optimisation which my experience tells me you'd be best to undo again. Just increment the logic as per each fixed step. Don't try to multiply it by the number of frames. What that will do is change the behaviour slightly depending on CPU usage. No matter how hard you try to prevent that, it'll bite you in the ass sooner or later.
I know that if I had your entire code I'd have the problem fixed by now. (Please don't give me your code though)
Now if you still want help, please start answering some questions.
What I meant by "it takes 20 seconds for 1 second to show in the timer" is that, if the CPU usage of all other tasks is 95%, the effect is that of processing at 3 fps instead of 60. 20 seconds would pass in the real world while, ignoring the adjusting for dropped frames (which doesn't take much to get), only 1 second of the actual game's time would pass. I get a somewhat similar case with VirtualDub, and that is not my program. If Norton begins running its automatic updates (which is at normal priority), the video plays back very jerkily even though VirtualDub hardly uses any CPU, unlike my game. If I bump the thread priority up to "above normal", the jerkiness vanishes entirely.
I don't know what the difference is between the two functions, as they are otherwise identical. Haven't you checked out "The Interactive Animation"? The timer system used with that is practically the exact same as the one in my current project and was derived directly from the "HighPrecClockTest" function. Comparing the two, the only difference I see is the lack of the math for determining the frequency (my current project uses a constant, but both evaluate to the same amount). The same problem is present in both functions. Note the delays reported in my screenshot. You see lots of 16's, 17's, 18's, and 15's, but occasional oddballs. These oddballs are due to other programs running in the background using the CPU, and the exact same effect is present in my current project. I don't know what your other questions are outside the "20 seconds for 1" thing. The only change I made to the function was replacing two function calls with an if statement:
When a frame is lost, the distance traveled is multiplied by "FramesToAdvance", which is normally 1. That was the only change made. Even then, I don't see what's wrong....
Code:
void AdvanceClock()
{
    ...
    while (Offset > (16.66666666666667 + RangeLimit)) // excessively long delay (as from sudden stalls due to sudden CPU usage)
    {
        if (AnimationMode != FrameAdvanceMode) // only for normal mode
        {
            FramesToAdvance++;
        }
        Actual += 16.66666666666667; // boost the actual time to keep in sync
        CurrentTime = timeGetTime(); // reobtain the clock
        Offset = CurrentTime - Actual; // and the offset
    }
}
As I said - drop your solution with Sleep - you will NEVER get accurate timing using that function.
Or switch to MM timers.
Or at least(!) replace Sleep with WaitForSingleObject (or equivalent) on a waitable handle.
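One way to read that last suggestion is to block on a periodic waitable timer instead of polling with Sleep(1). A minimal sketch of that idea (my substitution, not code from this thread; assumes a Windows toolchain, and the 17 ms period is just the closest whole-millisecond approximation of 1000/60):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Auto-reset timer: each successful wait consumes one signal. */
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);
    if (!timer) return 1;

    LARGE_INTEGER due;
    due.QuadPart = -1LL; /* relative due time in 100 ns units: fire almost immediately */
    SetWaitableTimer(timer, &due, 17, NULL, NULL, FALSE); /* then every 17 ms */

    for (int frame = 0; frame < 60; ++frame) {
        WaitForSingleObject(timer, INFINITE); /* sleep until the next tick */
        /* ... run one fixed logic step / render here ... */
    }
    CloseHandle(timer);
    return 0;
}
```

Unlike a Sleep(1) polling loop, the wait here is driven by the kernel's timer, so the thread wakes only when a tick is actually due.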
Commenting out Sleep makes the CPU usage go up a lot and still doesn't correct the problem.... And I am using multimedia timers; that's what timeGetTime, timeBeginPeriod, etc. are!
You continue to say that. But it does not make you right.
A timer is a mechanism that runs some function on an event.
You do not have any events set in your application, so you do not have any timers.
Reading the time and waiting is NOT timer usage; you just read the time and wait (in a horrible way). So start to believe it.
So what am I supposed to do then?