1. Memory doesn't quite double every two years; if it did, we'd have almost 80 GB of memory by now.
Code:
```$ python
Python 2.4.3 (#1, Jul 27 2009, 06:41:38)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
>>> x = 64
>>> for n in range(0, 20):
...     x *= 2
...
>>> print x
67108864
>>>```
Anyway, looks like about 300 years from now, with Moore's Law still holding, we'd have about 10**45 GB of memory....
Code:
```>>> x = 2
>>> for n in range(0, 300/2):
...     x *= 2
...
>>> print x
2854495385411919762116571938898990272765493248
>>> import math
>>> math.log(x) / math.log(10)
45.455529345261155
>>>```
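The loop above just computes 2**151 (it starts at 2 and doubles 150 times), so the same figure falls out of a one-liner, no loop needed:

```python
import math

# The loop starts at 2 and doubles 150 times, i.e. 2**151.
x = 2 ** 151

# log base 10 gives the number of decimal digits, roughly.
print(math.log10(x))   # about 45.46, matching the loop's result
```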

2. ...Wait, what kind of weird math did you use there?
log10(x) / log10(10)?

3. Originally Posted by Elysia
I don't find that to be much of a benefit.
First off, all functions should really take an argument and return a bool, or some other type, to indicate failure, instead of making the return type signed and having all negative numbers mean failure (a waste of bits!).
Sure, but it can be very convenient to have an embedded error value.

Can you imagine the complications that would arise if a pointer value of NULL didn't mean "error" but in fact was a valid value? You'd have to have a lot of "bool is_this_pointer_valid" all over the place.
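For what it's worth, Python's own standard library does it both ways, which makes a handy illustration of the trade-off: str.find() embeds the error value -1 in its return range, while str.index() raises instead, keeping every non-negative return a real index:

```python
s = "hello"

# Embedded error value: -1 means "not found", stealing one value
# from the range of otherwise-valid results.
pos = s.find("z")
print(pos)   # -1

# Separate failure channel: same lookup, but failure is an exception,
# so any value that does come back is a real index.
try:
    pos = s.index("z")
except ValueError:
    pos = None
print(pos)   # None
```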

Secondly, signed mathematics on unsigned types should work:

Code:
```typedef unsigned int uint;```

I've left out the initialization on purpose.
I suppose so. Assuming overflow/underflow works as you'd expect. (Does the standard guarantee that, out of curiosity?)

4. Originally Posted by Elysia
...Wait, what kind of weird math did you use there?
log10(x) / log10(10)?
That's just the logarithmic base change identity (or whatever they call it). logZ(x) / logZ(N) is the same as logN(x), for any base Z.

Python's log() is actually base e, but like I said, it works for any log base.

So, for example, the log of 1024 in base 2 is
Code:
```>>> math.log(1024) / math.log(2)
10.0
>>>```
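Incidentally, Python's math.log() also accepts an optional base argument (it's been there since 2.3), which does the same division internally:

```python
import math

# math.log(x, base) is equivalent to math.log(x) / math.log(base)
print(math.log(1024, 2))    # 10.0 (up to float rounding)
print(math.log(1024, 10))   # same as log base 10 of 1024, about 3.01
```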
Here you go: http://en.wikipedia.org/wiki/Logarit...nging_the_base

5. Originally Posted by dwks
Sure, but it can be very convenient to have an embedded error value.

Can you imagine the complications that would arise if a pointer value of NULL didn't mean "error" but in fact was a valid value? You'd have to have a lot of "bool is_this_pointer_valid" all over the place.
Perhaps not, but for functions that return time, -1 is a waste of bits.

I suppose so. Assuming overflow/underflow works as you'd expect. (Does the standard guarantee that, out of curiosity?)
Haha, a language expert would have to answer that one.

Originally Posted by dwks
That's just the logarithmic base change identity (or whatever they call it). logZ(x) / logZ(N) is the same as logN(x), for any base Z.
Ahh, of course. I don't know that formula off-hand.

Python's log() is actually base e, but like I said, it works for any log base.
Why the hell would it work with e as base? That's ln, not log.

So basically we'll have about 10^45 GB memory in 300 years.

6. So basically we'll have about 10^45 GB memory in 300 years.
Umm... yeah, that's what I said. 10^54 B of memory, I suppose.

Of course, there's undoubtedly a nice formula which would calculate that without running a brute-force loop. What can I say; I have Python, I use it.  I guess this comes close:
Code:
```>>> import math
>>> def logn(n, x):
...     return math.log(x) / math.log(n)
...
>>> 2**(logn(2, 65536) + 300/2.0)
9.3536104789177787e+49
>>> logn(10, 2**(logn(2, 65536) + 300/2.0))
49.970979280220874
>>>```
'Course, one should use 2 GB there instead of 64K.
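Redoing that with 2 GB (2**31 bytes) as today's figure and 150 more doublings, using the same logn() helper:

```python
import math

def logn(n, x):
    # log of x in base n, via the change-of-base identity
    return math.log(x) / math.log(n)

# 2 GB = 2**31 bytes today; 300 years at one doubling
# every two years adds 150 more doublings.
total_bytes = 2 ** (31 + 150)
print(logn(10, total_bytes))   # about 54.5, i.e. roughly 10**54 bytes
```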

Why the hell would it work with e as base? That's ln, not log.
I know. It's very confusing. But computer scientists seem to always use log2() or loge(), both of which they call "log". The standard C function log(), for example, also uses base e.

7. Originally Posted by dwks
I suppose so. Assuming overflow/underflow works as you'd expect. (Does the standard guarantee that, out of curiosity?)
I think in this case, "that" actually equals "two's complement", and no the standard does not guarantee that. EDIT: No wait, you're using a cast, not a re-interpretation of bits. I still think it's implementation-specific, but I don't have the standard with me right here.