1. ## double VS UINT64

Hi.

I'm sorry for asking such a simple question, but I don't really know how to search for the answer: which is bigger, UINT64 or double? Can I safely do the cast

Code:
```UINT64 myUINT = ...something;
double mydouble = (double)myUINT;```

without losing any digits?

And if double is smaller, what is its equivalent in the progression
int (which is 32 bits) -> __int64 (double the bits) -> UINT64 (and one more position gained by not having a minus)?

Thanks!

2. Both are 8 bytes (at least on x86 and x64, though other platforms may vary). However, they are not safe to cast between, because double is a floating-point type (it holds decimals!) and UINT64 is an integer type (it holds only whole numbers!).

3. Originally Posted by Elysia
However, they are not safe to cast between
The question arises from the following piece of code. Obviously, while dividing two ints you might get a fraction... This is where double comes in handy... no?
Code:
```UINT64 timeNow, timeLast, nSystemClockFrequency;
double deltaTime = (double)fabs(timeLast - timeNow) / (double)nSystemClockFrequency;```

4. A double can hold an integer precisely if it is between -(2^53) and 2^53, so you'll start to lose precision if your UINT64 value is above 2^53 (if I'm right; this stuff is getting a bit old in my head). Since the maximum value of a UINT64 is 2^64 - 1, you can't safely cast it to a double if you want to keep every significant digit.

So, looking at your code, what you are doing is fine. Anyway, I don't think the result of (timeLast - timeNow) will get anywhere near 2^53, because, say you have a 2.5 GHz machine: it would take 3 602 879 seconds to reach that many clock cycles, which is about 1 000 hours, or 41 days. And between you and me, if you need nanosecond precision for something that takes more than 41 days, you're probably doing something wrong.

By the way, does using fabs() make sense in your case?

5. Sure, if you want to throw away the decimals after a division, then you can convert from float/double back to integer. However, even though double is 8 bytes, it does not have the same range as UINT64.

6. Originally Posted by foxman
A double can hold an integer precisely if it is between -(2^53) and 2^53, so you'll start to lose precision if your UINT64 value is above 2^53 (if I'm right; this stuff is getting a bit old in my head). Since the maximum value of a UINT64 is 2^64 - 1, you can't safely cast it to a double if you want to keep every significant digit.
I believe that can be checked with:
Code:
```#include <iostream>
#include <limits>
#include <windows.h>

int main()
{
    std::cout << "double: " << std::numeric_limits<double>::digits << "\n"
              << "UINT64: " << std::numeric_limits<UINT64>::digits << std::endl;
}```
If the result for double is less than that of UINT64, the maximum UINT64 value cannot fit into the mantissa of a double.

However, even though double is 8 bytes, it does not have the same range as UINT64.
The range of a double is greater than that of a UINT64 due to the exponent. foxman's point is that of a loss of precision when converting from UINT64 to double because the UINT64 value may not fit into the mantissa of a double.

7. Originally Posted by foxman
So, looking at your code, what you are doing is fine. Anyway, I don't think the result of (timeLast - timeNow) will get anywhere near 2^53, because, say you have a 2.5 GHz machine: it would take 3 602 879 seconds to reach that many clock cycles, which is about 1 000 hours, or 41 days. And between you and me, if you need nanosecond precision for something that takes more than 41 days, you're probably doing something wrong.

By the way, does using fabs() make sense in your case?
1) Got you about the double part. The difference between successive samples is just too small... But if I subtract two numbers that are both too large for a double to hold exactly, does it matter in any way?

2) fabs is important here because you can't tell whether the counter is in its negative or positive range, so the subtraction could be either negative or positive.

8. Originally Posted by misterowakka
2) fabs is important here because you can't tell whether the counter is in its negative or positive range, so the subtraction could be either negative or positive.
Kurt

9. Originally Posted by ZuK
Kurt
:-) stupid me...

5-4 = 1
-4 - (-5) = 1

the counter gets more positive each time...

But another question:

Should I consider the possibility that the counter has reached the end of its cycle and suddenly wraps around?...

Meaning, if the counter goes up to 100, then

count1 = 98
count2 = 2

diff = 2-98 = -96
actual diff = 4...But only if I know that the size of the counter is 100...and I don't know the size of the counter...

10. Originally Posted by misterowakka
...and I don't know the size of the counter...
You do. The size of your counter is the maximum value of a UINT64.
But then a UINT64 counter doesn't overflow very often.
Kurt

11. Originally Posted by ZuK
You do. The size of your counter is the maximum value of a UINT64.
But then a UINT64 counter doesn't overflow very often.
Kurt
I do understand the logic of what you're saying. But how come I've quite often seen a negative value returned from QueryPerformanceCounter()? Because I didn't convert it to UINT64?

Besides, when does this counter start running? Each time I turn on the computer? Because according to what foxman correctly claimed before, it would take a fast machine about 40 days to reach the end of a UINT64. (I'll turn off my computer once a month :-) ).

12. Originally Posted by foxman
A double can hold an integer precisely if it is between -(2^53) and 2^53, so you'll start to lose precision if your UINT64 value is above 2^53 (if I'm right; this stuff is getting a bit old in my head). Since the maximum value of a UINT64 is 2^64 - 1, you can't safely cast it to a double if you want to keep every significant digit.
Actually, it's 2 ^ 63.

But how come I've quite often seen a negative value returned from QueryPerformanceCounter()?
Couldn't answer this one since I don't know the QueryPerformanceCounter() function and I don't want to look it up. I've already heard about it, but I know one "easier" way to access the clock cycle counter on IA-32 / Intel 64 architecture processors. QueryPerformanceCounter looks too awkward to me.

foxman correctly claimed before, it would take a fast machine about 40 days to reach the end of a UINT64.
In fact, what I said is that it would take a 2.5 GHz machine about 40 days to complete 2^53 clock cycles. It would take ~234 years for that same machine to complete 2^64 clock cycles.

Should I consider the possibility that the counter has reached the end of its cycle and suddenly wraps around?...
Even if the counter had a chance to overflow, it wouldn't make a difference (provided the number returned by the counter is an unsigned number).

That said, you might want to read a bit on how computers represent numbers and how they do arithmetic operations (that said, I have no good link to refer you to; maybe you could look on Wikipedia and/or Google).

Basically, if you subtract two unsigned integers, the result will always be equal to the difference between the two numbers, modulo 2^N (where N is the width of the type), so a wrapped counter still yields the right elapsed count.

This program may convince you (it may not compile if you aren't using a Microsoft compiler):
Code:
```#include <stdio.h>

int main()
{
    unsigned __int16 u1 = 0x0010;
    unsigned __int16 u2 = 0xFFF0;    /* Max value of an unsigned __int16 is 0xFFFF */
    unsigned __int16 r1;

    r1 = u1 - u2;    /* wraps modulo 0x10000 */

    printf("%04X - %04X = %04X\n", (unsigned int)u1, (unsigned int)u2, (unsigned int)r1);

    return 0;
}```
Actually, it's 2 ^ 63.
The max value of a 64-bit unsigned integer is 2^64 - 1, just like the max value of an 8-bit unsigned integer is 2^8 - 1.

14. Originally Posted by foxman
The max value of a 64-bit unsigned integer is 2^64 - 1, just like the max value of an 8-bit unsigned integer is 2^8 - 1.
Did you look that up?
Actually, I just took a quick look at 0x7FFFFFFFFFFFFFFF (signed) and it looked equal to 2 ^ 63.

But how come I've quite often seen a negative value returned from QueryPerformanceCounter()?
Have a look at http://cboard.cprogramming.com/showp...3&postcount=14 to see QueryPerformanceCounter (sort of anyway) in action and how easily it can overflow (takes about 17 minutes for it to overflow a signed int).

15. Originally Posted by Elysia
Did you look that up?
Actually, I just took a quick look at 0x7FFFFFFFFFFFFFFF (signed) and it looked equal to 2 ^ 63.
It had better not have; since it ends in 1 (binary), it must be odd, and 2^63 would be even. That number has 63 1's, so it should be 2^63 - 1. (And of course, the max unsigned value would be 0xFFFFFFFFFFFFFFFF, since we don't need the sign bit, which is 2^64 - 1.)