If I create a variable of type float and then print it to the screen with anything other than the %f format, such as %d, I get a *seemingly* random number.
Is C performing some calculation on my float value to turn it into the %d output?
The code below, with data set to 10, displays "1076101120". I understand that I need to use %f, but I would like to know how printf comes to output "1076101120".
printf("Output in decimal format: %d", data);