I've a nagging question that I have been unable to figure out. From what I understand,
floats have a precision of 6 digits. And this is supposed to be 6 digits in total, not just 6 digits after the decimal point.
So the values that could be stored with precision in a floating point number are as follows:
12345.6, 1234.56, etc. (i.e. as the whole-number part gets more and more digits, the number of digits that you can store precisely after the decimal point correspondingly decreases). Am I correct in my understanding till here?
So my issue is, I have an array of floats with values ranging from the ones to the thousands. However, when I print out the floats, I see that all of them are printed with 6 digits after the decimal point.
For instance, a number such as 1000.82 would only be stored with 2 digits of precision after the decimal point, right? But when printing it out I get something like 1000.824092. Is this some quirk of the printf statement, that it automatically fills in junk digits up to 6 decimal places? Or is it that any number declared as a float must always have 6 digits after the decimal point?
I also tried rounding off all of the floats to 4 decimal digits, but that seems to have no effect: all floats are still printed with 6 decimal digits.
If this filling out to 6 digits happens only at the printf, I can go ahead and use the floats for further calculations. But if the numbers are stored in memory itself like this, using them for calculations would mess up a lot of things. That's why I'm trying to get this cleared up.
I know using 'double' instead would save me a lot of trouble. But I am actually supposed to take this float array and transfer it onto graphics cards for parallel processing, and double-precision functions are either unavailable or incredibly slow on most graphics cards. That's why I am trying to see if I can work it out with floats themselves.