# Thread: a few opengl(or general programming) questions

1. ## a few opengl(or general programming) questions

Most OpenGL functions take floats as their arguments. While programming I have noticed that some people pass the arguments with the f suffix, e.g.:
Code:
`glColor3f(0.0f,0.0f,1.0f);`
I just use 0.0,0.0,1.0
I know these are doubles, but is it bad not to explicitly tell the compiler that the argument is a float?
--AND--
Code:
`fread(&image->sizeY, 4, 1, file);`
I found this on NeHe and was wondering: isn't the second argument bad practice? Shouldn't it be sizeof(unsigned int) instead of 4?

2. They are not doubles, they are floats, and I believe the f suffix just tells the compiler that there is definitely nothing beyond the last decimal place given.

For the second one, most would say yes, it is bad practice, but you will hardly ever get burned for it. In an ideal world, you will always use the sizeof() operator, but I wouldn't worry about it too much otherwise.

edit

until you get a job writing the next distribution of Linux at your multi-million-dollar programming agency. In that case, always use sizeof, just to seem more professional

3. ok, I was just checking. I am still wondering about the first question though.
Floating-point constants contain a decimal point (123.4) or an exponent (1e-2) or both; their type is double, unless suffixed. The suffixes f or F indicate a float constant; l or L indicate a long double.
Source: The C Programming Language 2ed
What I was wondering is: can a conversion error occur when I pass doubles?

4. A 'float' and a 'double' are two different floating-point types, with the latter having roughly double the precision. By this definition they are not the same, but if you take 'double' as a generic term for 'decimal number', then yes, they are both doubles.

edit

in short, a 32-bit float can never have the same precision as a 64-bit double

edit again

You should not get a conversion error; most compilers will just give about 80,000 warnings about truncation (for Visual Studio, this depends on your warning level).

5. I guess I'll start adding the f. I compile with -Wall -pedantic and I don't get any warnings.

6. 0.0 = double
0.0f = float

Yes, use the f's. Not only will it be easier on the compiler, but I believe it will probably also give you a notch or two of speed at runtime... depends on the compiler...

7. that is what I figured

8. how about a little experiment...

Code:
```#include <cstdio>   // printf lives here, not in <iostream>
#include <iostream>
using namespace std;

int main(int argc, char** argv) {

    // compare a double literal to a float literal
    if (0.2 == 0.2f) {
        printf("floats and doubles are essentially the same\n");
    }
    else {
        printf("OMG, there is a difference in precision that could affect my program!\n");
    }

    cin.get(); // for the windows users
    return 0;
}```

I'll save you the suspense; the output (for most compilers) is:
"OMG, there is a difference in precision that could affect my program!"

9. I did it with the clock() function and got the same results both ways; I guess my programs are too small.
Code:
```The interval was: 0.020000 seconds
--WITH THE f--
The interval was: 0.020000 seconds```
I did get 0.03 seconds with the f, but I had more occurrences of 0.03 without the f.

10. the 'f' suffix tells the compiler to handle the literal as a float instead of its default double handling...

on the second question, no, it's not necessarily bad practice.
I think this idea is a carry-over from memory allocation, where specifying the actual size of the type is critical; otherwise it depends on context. If you decide to compile this on a different machine some day, think of it like this: if the data in the file is written as four bytes, and sizeof(unsigned int) happens to be 8 on that machine, then you can imagine the trouble. Maybe they should use a define or something for the size, but it's not necessarily bad practice.

btw: the f suffix should give no runtime speed change, and little or no compile-time change as far as I know (I could be wrong); it's mostly for warning suppression, because most compilers treat floating-point literals as doubles by default.

11. Originally Posted by linuxdude
I did it with the clock() function and got the same results both ways; I guess my programs are too small.
Code:
```The interval was: 0.020000 seconds
--WITH THE f--
The interval was: 0.020000 seconds```
I did get 0.03 seconds with the f but I had more occurences of the 0.03 without the f.
If you want to see any difference you might want to do more with the variable. Try setting up a for loop that runs, say, 100,000 times or more for both the float and the double and see what that produces.

12. I'm not sure where the misconception about floats and doubles comes from, but it is simply that... a misconception.

Based on research that Fordy and I have done in the past with pure assembly FPU opcodes and programs and based on Intel's IA-32 manuals there is absolutely NO performance hit for using doubles over floats or vice versa.

Simply put, the default data width of the FPU is larger than either float or double. So let's say, just for discussion's sake, that float is faster than double. That would imply that the conversion from float to the FPU's native data type is faster than the conversion from double. That, in itself, does not make sense logically, since double has more bits and, in fact, wastes fewer bits in the conversion than float does.

The reason you see absolutely no difference in the runtimes is because there isn't any. According to the Intel manuals, the opcodes for converting floats and doubles to the native data type of the FPU take exactly the same number of clock cycles. Likewise, the conversion from the native type back to double or float takes the same number of cycles.

So according to Intel's docs...you could theoretically start with a float...convert to the native type....and end with a double with the same performance as if you started with a float...converted...and ended with a float.

I was under the assumption for quite some time that doubles were faster than floats because they were the native data type of the FPU. But I found out, thanks to Fordy, that this is not true. Floats and doubles take exactly the same amount of time to convert to the 80 bit data type of the FPU.

Taken from an older 486 manual the opcode to load any local real is FLD.

To push an m32 real or m64 real onto the stack requires 3 clocks on the 486. There is no timing information provided in modern IA32 manuals because the timing cannot accurately be predicted in all cases. However it is safe to assume that if the older 486 could do both float and double to 80 bit conversions at the same speed then the modern CPUs do as well.

FLD m32real -> 3 clocks
FLD m64real -> 3 clocks
FLD m80real -> 6 clocks

In C data types:

m32real=float
m64real=double
m80real=long double

So unless you are using long doubles in your code then you will not suffer any performance hit. Note that there is NO other opcode to load the FPU with local real data. So if your code uses floats and/or doubles then your compiler is using this opcode which translates to:

There is no clock-wise performance hit for using floats instead of doubles or vice versa. Doubles take more memory, and floats can be copied from memory to memory in one fetch whereas doubles take two, but clock-wise the FLD opcode is the same for doubles and floats.

Try it and see.

It should also be noted that m32real division, multiplication, addition, and subtraction also take exactly the same number of clocks as their m64real counterparts. Again, no performance hit. In fact, nearly all opcodes dealing with real numbers (comparing and/or performing some other mathematical operation on them) have exactly the same clock times for m32real as for m64real.

According to the specs it is actually slower to attempt to store a 64-bit integer on the FPU stack and/or pop it from the FPU into memory. This probably adds weight to the argument in favor of the new 64-bit CPUs. It looks as though Intel had IA32 and IA64 in mind when they developed even as far back as the 486 given that most 32-bit and 64-bit operations take the same number of clocks.

Fordy and I did not try this on code that required heavy computation, but it could be tested via a simple template class: create two objects, one of type float and one of type double, and profile the code. Given that the compiler must use FLD somewhere regardless of whether the type is float or double, I'm sure that both code fragments would run at the same speed.

Thus far your code profiles and numbers match what the Intel docs say.

13. Based on research that Fordy and I have done in the past with pure assembly FPU opcodes and programs and based on Intel's IA-32 manuals there is absolutely NO performance hit for using doubles over floats or vice versa.
That much should be quite obvious with no research, from the fact that the FPU is 80 bits wide, whereas both doubles and floats are smaller data types. There might be a performance hit if the FPU has to convert each data type sent to it to an 80-bit type (behind the scenes) before processing, much in the same way a bool is converted to an int before being sent to a 32-bit processor. But if such a hit exists, it would be more or less the same for both. The only way to get deeper into this is to examine, at the electrical level, how the FPU is wired.

14. Well the reason we got into it was because I posted over at flashdaddee that doubles were faster because I read that somewhere. He insisted that floats were and when we tested code we found out that we were both wrong.

But you can still find this misinformation all over the place and I've even found it in some of my 3D books. Perhaps it is slower for a video card to process 64 bits at a time rather than 32 or perhaps it has something to do with the AGP bus - but at a CPU level there is no diff.

15. Agreed.