Originally Posted by
iMalc
That at first seemed like a great idea, and it would work for people who have implemented something where each decimal digit is stored in its own int or something similarly wasteful, but it won't work for a "wasteless" implementation where the bigint is an array of unsigned longs. The reason is that the result of the subtraction would have to range from 0xFFFFFFFFU - 0 down to 0 - 0xFFFFFFFFU. That's -0xFFFFFFFF to +0xFFFFFFFF, requiring an extra bit as well as a cast to a signed datatype. Of course it will work if you cast both operands to a larger datatype first, but I imagine the efficiency will then come out about the same as the original way I posted. Well, maybe still a little ahead if you're lucky.