Confusion regarding the storage of floating-point numbers on various architectures
Hi,
consider the following three programs:
Program 1:
Code:
#include <stdio.h>

int main(){
    printf("%f\n", 7/2);
    return 0;
}
Program 2:
Code:
#include <stdio.h>

int main(){
    int a = 7;
    int b = 2;
    printf("%f\n", a/b);
    return 0;
}
Program 3:
Code:
#include <stdio.h>

int main(){
    int a = 7;
    printf("%f\n", 7/2);
    return 0;
}
I am getting different outputs from program 1 and program 2. The output of program 3 is the same as that of program 2.
I expected the output of these programs to be machine dependent, since it depends on how a floating-point number is stored.
But I am unable to understand why the outputs of program 1 and program 2 differ. Both programs actually do the same thing. Does it have something to do with the layout of the program as well?
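For comparison, here is a variant that I believe avoids the mismatch by making the division itself floating-point, so the argument passed to printf really is a double as %f expects. This is only my guess at the relevant difference; the (double) cast is my addition and not part of the original programs:
Code:
#include <stdio.h>

int main(){
    /* 7.0/2 is a double division, so the argument
       matches the %f conversion specifier */
    printf("%f\n", 7.0/2);

    /* the same idea with variables: convert before dividing */
    int a = 7;
    int b = 2;
    printf("%f\n", (double)a / b);
    return 0;
}
Both lines should print 3.500000.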
I am using a Debian machine with the standard gcc compiler.
Regards,
Pratik