Hi,
I have a couple of basic questions about the data types of integer constants in C.
Take a simple fragment like this:
Code:
long i;
i = 10;
I want to confirm how the compiler processes this assignment. My understanding is: the compiler sees the integer constant 10; since it is an unsuffixed decimal constant, the compiler checks the types int -> long -> unsigned long in order (assuming ISO C90, not C99) and picks the first one that can hold 10. 10 fits in int, so the compiler gives it type int and stores it in 4 bytes (assuming a 32-bit machine); that storage might be a register or some temporary location. That covers the processing of the constant 10 itself. Then, for the assignment, 10 is converted from int to long because i is long, and the converted value is assigned to i.
If the processing is like the above, my questions are:
1. What internal data type does the compiler (gcc or any other) use in this situation? The compiler must hold 10 in some internal type while it figures out which C type (int, long, or unsigned long) the constant gets. Is that internal type the widest one available? For example, if the widest type is 8 bytes, is every integer constant first read into that 8-byte type and then checked against int, then long, then unsigned long? And if the internal type is the widest one, what happens when an integer constant is too large even for it?
2. What is the data type of an integer constant actually used for?
I can only guess two points:
1) The compiler allocates storage for the constant according to its type. For example, the temporary storage for 10 is 4 bytes because the type of 10 is int.
2) When an integer constant appears in an arithmetic expression, its type takes part in the type conversions among the operands and determines the result type. For example, 10*10 has type int, while 10*10ul has type unsigned long.
Is there any use I haven't noticed?
3. I found the following example on the web:
Code:
on Keil C51 (2-byte int, 4-byte long):
long totalsec;
totalsec = 60 * 60 * 24 * 365;    // result is 0x3380, wrong
totalsec = 60ul * 60 * 24 * 365;  // result is 0x1E13380, correct
on Keil MDK-ARM (4-byte int, 4-byte long):
long totalsec;
totalsec = 60 * 60 * 24 * 365;    // result is 0x1E13380, correct
totalsec = 60ul * 60 * 24 * 365;  // result is 0x1E13380, correct
So on Keil C51, in the first line, int's range is [-32768, +32767]. 60*60*24 is 86400, which is already out of that range before the multiplication by 365. Is 86400 truncated before multiplying by 365, or is it kept in the compiler's biggest internal type, multiplied by 365 to give 31536000, and only converted to long at the end? If the latter were the case, the first line on Keil C51 would produce the correct result. Since it doesn't, there must be truncation at each operation rather than one big internal type, right?
Actually I am quite confused by the data types of integer constants in C. I don't understand exactly how the compiler processes them, or why code like
Code:
#define SECONDS_PER_YEAR (60UL * 60 * 24 * 365)
has to carry the UL suffix. (I originally saw it written as (60*60*24*365)UL, but that form is not even valid C: a suffix can only be attached directly to a constant, not to a parenthesized expression.)
I would really appreciate it if anyone could explain, correctly and in detail, how the data types of (integer) constants work.
THANK you very much!!