1. Understanding floating point

I just got burned at a programming competition for not knowing the various functions related to precision/rounding of floating-point values.

Anyhow, I'm having trouble finding an explanation of how floating-point numbers are represented in memory, and of the arithmetic behind them. (Something about a "mantissa" and a "base"?)

If anyone could help me out, that'd be great. (Yes, I already searched Google.)
Gawd, I'm such a jackass.

2. http://scholar.hw.ac.uk/site/computi...1.asp?outline=

I learned about that in an architecture course last year. ("Mantissa" sounded familiar, but I don't remember much about floating-point numbers.)

It was under a chapter called Data Representation. I searched Google for "Data Representation" + "Floating-Point Numbers" and came up with that link.

3. Here's a tutorial on floating point.

http://www.nuvisionmiami.com/books/a...oating_tut.htm

Here's some info, also about the IEEE standard on floating point.

http://research.microsoft.com/~holla...ieeefloat.html
http://cch.loria.fr/documentation/IEEE754/

4. I wrote a printf function in assembly which had to deal with floats like that (because I couldn't get the FPU code to work... )

You have one sign bit, positive or negative. The rest of the bits are divided between the mantissa and the exponent. (Remember that bits are represented using 1 and 0.)

Using binary scientific notation, you come up with numbers like this:
1.001110110 x 2^111011
For normalized numbers, the 1 before the binary point is always there, so it isn't stored; only the fraction after the point is kept (the "implicit leading 1"). The exponent is the number after "2^" (the "2^" itself is redundant too, so only the exponent bits are stored, with a bias added so that negative exponents fit).

5. I couldn't remember how to format floats using C++ streams, so with about a minute left I got the bright idea to try printf... it worked, but I didn't submit it in time. Cost me a \$1000 scholarship.

Anyhow, those links are perfect. Thanks everyone.