# Thread: conversion from double to float - loss of data

1. ## conversion from double to float - loss of data

I have an assignment for programming that is actually pretty easy, if it were not for this error:
"Warning C4244: '+=' : conversion from 'double' to 'float', possible loss of data."

note: I MUST use float for result and rate, and int for duration.

For some reason, I'm allowed to do this
Code:
```
float result, rate;
int duration;

cin >> rate;
cin >> duration;
result = rate * duration;
result += rate * duration;
```
But I'm not allowed to do this
Code:
```
for (int i = 0; i < slot; i++)
{
    gross[i] = rate[i] * hours[i];
    if (hours[i] > 40) gross[i] += .5 * (hours[i] - rate[i]); // Error here
}
// the error is loss of data due to conversion from double to float
// declared datatypes:
// float: gross, rate
// int: hours
```
Thanks ^^

2. You're "not allowed"? Who is not allowing it? This is a warning, not an error. The compiler is simply making sure you realize what is happening.

To eliminate the warning, provide an explicit cast. And try to get your head around the difference between a warning and an error.

3. .5 is a double because double is the default floating point type in C++. If you have to use float, not double, change your literal to a float by adding an f:
Code:
```
if (hours[i] > 40) gross[i] += .5f * (hours[i] - rate[i]);
```
>> note: I MUST use float for result and rate, and int for duration.
Did you mean you must use float, or did you mean you must use a floating point type? If you meant the latter, then I would use double.

4. Thanks Daved. This solves it for me =)

5. Question for the hardware people:

What can modern CPUs manipulate faster: float or double (if this can be answered)? Does it depend on 32/64 bit mode?

Thank you.

6. I'm not a hardware person, but since the standard allows float and double to be the same size, there wouldn't be any reason to offer a smaller floating-point type that was also slower. So I would think that float is always at least as fast as double.

Edit: Unless the implementation wanted to have 32- and 64-bit types coinciding with IEEE 754 float and double, but I still doubt that float would ever be slower.

7. Maybe it's slower on some architectures and faster on others? I have no idea...

BTW: Does anyone know a reference where I could look up all the size constraints for the PODs defined by the C++ standard?

8. >> Does anyone know a reference where I could look up all the size constraints for the PODs defined by the C++ standard?
You can find them in Annex E (Implementation limits) of the 1999 edition of the C Standard, though I am not sure where to find a draft. Strictly speaking, the C++ Standard refers to the 1990 version (same as 1989) of the C Standard, but I do not think these implementation limits have changed.

9. Originally Posted by robatino
I'm not a hardware person, but since the standard allows float and double to be the same size, there wouldn't be any reason to allow a smaller floating-point type if it was also slower, so I would think that float would always be at least as fast as double.

Edit: Unless the implementation wanted to have 32- and 64-bit types coinciding with IEEE 754 float and double, but I still doubt that float would ever be slower.
Yes, float is (on x87) always as fast as or faster than double. Most other architectures are the same. Note however, that x87 will perform all calculations to full precision, and only the loading and storing of the value will be affected by the size. [Pedants: Yes, this is slightly simplified, but it's how things generally work].

Of course, in some cases, a shorter float will make code faster simply because it takes up less space, thus you can fit more numbers in the cache, and get more work done in the same amount of memory space.

And when using SSE or similar techniques, shorter float types fit more "floats" into one register, meaning more work can be done in a single instruction -> faster execution.

The rule about "float and double can be the same size" is much more to satisfy machines that only have one floating-point type - such a machine can still compile code that uses either float or double. Just like long int isn't necessarily bigger than int - that rule allows 32-bit machines to have a 32-bit int that is the same size as a long.

--
Mats

10. x87?

11. Originally Posted by Elysia
x87?
x87 is the name of the Intel/AMD FPU. The name stems from the time when a 387 was something that you'd plug in next to the Intel or AMD 386DX processor. Of course, these days, the FPU is part of the overall processor. But the functional unit is still behaving the same in a "black box" world.

--
Mats

12. >> there wouldn't be any reason to allow a smaller floating-point type if it was also slower
Yes there would. You would allow it for space saving reasons. Think of short and int.

13. Huh, so the actual processor architecture is called x86 and the FPU in the x86 is called x87?

14. Originally Posted by Daved
>> there wouldn't be any reason to allow a smaller floating-point type if it was also slower
Yes there would. You would allow it for space saving reasons. Think of short and int.
Yes, good point. In fact, even if it was slower in terms of CPU, the extra cache efficiency for a large array might make up for it, resulting in greater overall speed (if the bottleneck was memory access).

15. Originally Posted by robatino
Yes, good point. In fact, even if it was slower in terms of CPU, the extra cache efficiency for a large array might make up for it, resulting in greater overall speed (if the bottleneck was memory access).
Not to mention that if someone sent you a file with IEEE-754 32-bit floats that you had to process, it would be a right pain if the compiler decided that you couldn't do that at all because float is a 64-bit double for speed reasons. It's one thing if the HW has no choice, but if the HW supports the format, you certainly don't want the compiler to stop you from using it.

[Not that it would be terribly hard to read the values into an array of the right type and write a 3-4 line assembler function to convert it, if the HW supports it].

--
Mats