What is the difference? Is one used for 32-bit machines and the other for 64-bit machines? Say I have a number in 32 bits and I want to convert it into 64 bits; how do I do this?
On a 32-bit machine, int and long are (most likely) the same size. On a 64-bit machine, long may be 32 or 64-bit depending on the choice of the system designers.
Many compilers support "long long", which is 64-bit in most architectures.
--
Mats
Say I have a long long int; that would be useful if I use it on a 32-bit machine, right?
Maybe and maybe not. It depends on the compiler.
Long is 32-bit in Visual Studio, for example, even when targeting 64-bit Windows.
I'd say use the (u)intXX_t types instead. For example,
uint32_t - Unsigned 32-bit integer.
uint64_t - Unsigned 64-bit integer.
No confusion about the size.
I really don't understand why they couldn't have added such types to the standard and locked them in place.
It's supposed to be in a header called stdint.h.
So if I wanted to write a program that extracts the sign, mantissa, and exponent on a 32- or 64-bit machine, and I specify this in my sizes.h, would this be right?
#if defined(MACHINE32)
typedef int int_4;
typedef unsigned int int_u4;
typedef long long int int_8;
typedef unsigned long long int int_u8;
#elif defined(MACHINE64)
typedef int int_4;
typedef unsigned int int_u4;
typedef long long int int_8;
typedef unsigned long long int int_u8;
#else
#error 666
#endif
int8_t for signed
uint8_t for unsigned
Basically, "u" is for unsigned, then it's "int", then you specify the size in bits, then you add "_t", and you have your type.
Say you want an unsigned integer of 16 bits. That would be:
uint16_t
Want a signed one?
int16_t
No. A typedef creates an alias for an existing type. (Déjà vu?) Even the uintN_t are optional additions to C99.
Use a type whose range is appropriate for the value it is supposed to hold.
So if I want to have an int_4 or an int_u8 on a 64-bit machine, how can I do that?
Firstly, I'd still suggest using the (u)intXX_t types, and secondly, the types will be right if compiled as 64-bit code.
You just use (u)int32_t and (u)int64_t where appropriate in your code, and you will get 32-bit and 64-bit types regardless of whether it's a 64-bit machine or a 32-bit machine.
Of course, your code needs to be compiled as a 64-bit app as well, otherwise it won't work.
But is there any alternative to that? What is the general 64-bit equivalent of a 4-byte unsigned int, for example?
These types are defined or typedefed to the appropriate underlying type, so, for example, uint32_t will always be 32 bits regardless of machine.
I think that's what you're asking?
What if I am asked to create my own typedefs, as defined here?
I did not test whether this is true (and I don't know which version of VS you are talking about, but I'm quite sure the 2005 long long int type is 64-bit), but the standard (C99) says that the long long int type must be able to store at least a 64-bit value.
Of course, few compilers are 100% standard-compliant.