I have been wondering why there is a limit to the size of numbers when you are programming. Why is 1.0e200 any different from 1.0e2 when it comes to how computers handle numbers? Also, is there a workaround for this? Maybe by having the numbers as strings and then having some special functions that do arithmetic on them?
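For example, here is a little C sketch of what I mean (I haven't verified the exact output, and I'm assuming IEEE 754 behaviour where an out-of-range value turns into infinity):

```c
#include <stdio.h>

int main(void) {
    double small = 1.0e2;    /* fits easily in a 64-bit double */
    float  big_f = 1.0e200;  /* too big for a 32-bit float: I expect this to become inf */
    double big_d = 1.0e200;  /* a double can hold it, since its max is around 1.8e308 */

    printf("small:         %e\n", small);
    printf("big as float:  %f\n", big_f);  /* presumably prints inf */
    printf("big as double: %e\n", big_d);  /* presumably prints 1.000000e+200 */
    return 0;
}
```

So the limit clearly depends on the type, but even a double gives up somewhere around 1.8e308, which is the part I don't get.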
Why are there two floating-point types (float and double), and when should you use which? I know that floats have limited precision; if I remember right, it's something like 0.0000001. Is this a problem in "real life"?
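To illustrate the precision question, I mean something like this (again just a sketch, assuming standard single- and double-precision types):

```c
#include <stdio.h>

int main(void) {
    /* 1e-8 is below a float's precision near 1.0 (machine epsilon is about 1.19e-7),
       so I expect the addition to be lost entirely in the float case */
    float  f = 1.0f + 0.00000001f;
    double d = 1.0  + 0.00000001;

    printf("float:  %.10f\n", f);  /* presumably 1.0000000000 */
    printf("double: %.10f\n", d);  /* presumably 1.0000000100 */
    return 0;
}
```

If that's right, then anything past roughly the 7th significant digit just disappears in a float.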
Thanks for your time