I came across this question, but I still can't understand why the output comes out to 64.

Code:
#include <stdio.h>

int main(void)
{
    int a = 320;
    char *ptr;

    ptr = (char *)&a;
    printf("%d ", *ptr);
    getchar();
    return 0;
}
I wouldn't be too concerned about the exact value you get when you cast your variable's address like this and read through it; the result will obviously vary from system to system.
So in binary, a = 1 0100 0000 (0x140). You set ptr to point at a, and since you are presumably on a little-endian machine, dereferencing the char pointer gives you the least significant byte of a, which is 0100 0000 = 64.
You're just printing the value of the first byte of the integer; had you been running the program on a big-endian machine, the output would have been 0. In other words (assuming 32-bit ints on your system), the value of 'a' in hex is 00 00 01 40, and since it's stored in little-endian format the first byte in memory is 0x40 (64 decimal).
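If you want to see the layout for yourself, here's a small standalone sketch (not from the original post) that dumps every byte of the int in hex, lowest address first:

Code:
#include <stdio.h>

int main(void)
{
    int a = 320;                            /* 0x00000140 */
    unsigned char *p = (unsigned char *)&a; /* view a as raw bytes */
    size_t i;

    /* Print each byte of a, starting at the lowest address.
       A little-endian machine prints: 40 01 00 00
       A big-endian machine prints:    00 00 01 40 */
    for (i = 0; i < sizeof a; i++)
        printf("%02x ", p[i]);
    printf("\n");
    return 0;
}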
Thanks, I understand it now. But then this is machine-specific, as on some other machine it might just as well be 0 (if it were big-endian).
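Right, and strictly speaking it's a property of the CPU architecture rather than the OS. If you're curious, a quick runtime check (just a sketch, nothing specific to your program) would look something like this:

Code:
#include <stdio.h>

int main(void)
{
    unsigned int x = 1;

    /* If the lowest-addressed byte holds the 1, the machine is little-endian. */
    if (*(unsigned char *)&x == 1)
        printf("little-endian\n");
    else
        printf("big-endian\n");
    return 0;
}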