1. ## Very simple printf

Code:
```c
#include <stdio.h>

int main(void)
{
    float x = 123.23f;
    printf("%f", x);
    return 0;
}
```
This code gives me the result 123.230003

Why do I see the 3 at the end?

2. Why are floating-point calculations so... inaccurate?

EDIT:
Note that "%f" with no precision specifier gives you 6 digits after the decimal point, which for a float is enough to expose the bogus 3 out there. You could use a double to push the inaccuracy farther out to the right so it doesn't show up at the default precision, but it's still not exact. (In printf, "%f" and "%lf" both work for a double, since float arguments are promoted to double anyway; the l is only required in scanf.) You can also tell printf to print fewer digits after the decimal point: for example, "%.3f" gives only 3.

Code:
```c
#include <stdio.h>

int main(void)
{
    double x = 123.23;
    printf("%lf\n", x);
    return 0;
}
```
This one gave the result: 123.230000

At first I thought I had made a mistake, but after your reply I see there is an interesting area to explore. I will try to learn more about the binary representation of floating-point constants.

4. Understanding floating point is one of the more interesting, and most misunderstood, topics. It can get very deep mathematically as well. But it's well worth looking into how floating point is represented internally.