Well, since I have some free time, I decided to do a bit of testing to see which one is faster.
Here are the two functions:
Code:
static int Round1(int number, int place)
{
    int i = 1;
    if (place <= 0)
        return number;
    // i becomes 10^place
    while (place > 0)
    {
        i = i * 10;
        place--;
    }
    int r = number % i;        // distance below the current multiple of i
    if (r < (i / 2))
        return number - r;     // round down
    else
        return number - r + i; // round up
}
static int Round2(int someNumber, int place)
{
    int roundTo = 1;
    if (place <= 0)
        return someNumber; // already an int, so no cast needed
    // roundTo becomes 10^place
    while (place > 0)
    {
        roundTo = roundTo * 10;
        place--;
    }
    // add half the rounding unit, then truncate down to a multiple of it
    return ((someNumber + (roundTo / 2)) / roundTo) * roundTo;
}
I ran each function in a loop 1,000,000,000 times.
When rounding to the nearest 100, both functions took exactly 40 s. When rounding to the nearest 10,000, yours took 40 s and mine took 41.5 s. I guess we can conclude that your function is faster when rounding to larger place values.