Hello,

I have been wondering why there is a limit to the size of numbers when you are programming. Why is 1.0e200 any different from 1.0e2 when it comes to how computers handle numbers? Also, is there a workaround for this? Maybe by storing the numbers as strings and then having some special functions.

Why are there two floating point types (float and double), and when should you use which? I know that floats have limited precision, if I remember right it's something like 0.0000001, so is this a problem in "real life"?

2. Because the arithmetic registers in your CPU are limited in size. A double provides more accuracy over a broader range, at the cost of greater memory usage.

There are workarounds to the size limit. Some third-party libraries can support numbers up to a size *limited only by your computer's resources*.

3. so is this a problem in "real life"?
Yes, it is. Each value you store has limited precision... So you think you have some value x, but it may actually be stored as any value in the range [x - epsilon, x + epsilon], where epsilon is the maximum error. That may be no problem so far... But after one operation on your data the error can be 2*epsilon, and with each operation the error grows... After a lot of operations you can no longer be sure that even the first digit of your result is accurate...

So with a small number of operations you can start with float and be sure at the end that the 3 or 4 digits you print are all accurate...

For long calculations that can bring big potential precision problems, you will start with double, to be sure that the printed part of the result is accurate... If you are interested in this question you can find a lot of material on the subject on the internet...
For example http://support.microsoft.com/kb/125056

4. Internally, the computer represents decimals using a binary form of scientific notation. Expressed in decimal terms, a float would be expressed as:
A * 10^B
Where A is accurate to about 7 decimal places.
Where -38 < B < 38.

These limitations are simply due to the fact that A and B are represented by a finite number of bits.

A double is thus named because it doubles the number of bits used to represent A and B. It is more accurate, at the cost of requiring more space.
A double, translated into power of 10 is roughly
A * 10 ^ B
Where A is accurate to about 15 decimal places.
Where -308 < B < 308.

If you need any more precision and range than that, then many compilers support a long double type which has more precision than double.

If you need perfect precision, then give up. There exist non-integral values which are fundamentally impossible to represent to perfect precision. Consider how you would represent 1/3 without rounding or how you would represent pi.

If you need infinite range, then you may need to roll your own implementation, but it can theoretically be done.

5. Originally Posted by h3ro
Why are there two floating point types (float and double), and when should you use which? I know that floats have limited precision, if I remember right it's something like 0.0000001, so is this a problem in "real life"?
People worry about precision in the computer realm, as if precision were not also limited outside of that realm. When you are doing arithmetic on paper, you believe that you can be infinitely accurate, but you can't. A human simply doesn't have the ability to write down and manipulate an infinite number of digits. As a finite thing, a computer obviously can't do this either.

Sometimes people make a big deal about what numbers an FPU can and cannot represent precisely. To those people, I like to ask how they would write down the number 1/3 precisely in decimal. You can't, because you need an infinite number of 3's in 0.3333... The inexactness has nothing to do with binary, nothing to do with computers, and everything to do with the concept of base representation of numbers. The numbers 0.1, 0.01, 0.001, etc. happen to be impossible to exactly represent in binary (whereas 0.5 can be exactly represented). But worrying about this is just as pointless as worrying that you cannot write down 1/3 precisely in decimal.

Or take a number like pi, which is well-defined, yet cannot be represented in ANY base using a finite number of digits. Does this mean we can't write computer programs that use the value of pi? No...