## Binary Operator Algorithms

1. Okay, my HugeFloat class is working very well for all the standard mathematical operations (+ - / * %), and fast: squaring Pi to 10,000 digits took only about 12 seconds. Anyway, I would like to implement the binary operators (^ | & ~), but I'm thinking it may be more of a challenge than I bargained for. My data is stored with two digits of the true value in each element of a byte array, so, for example, 1234 would be stored something like: [0] = 34, [1] = 12. I think that no matter how the separated data are stored, it would be tricky to attempt anything that inherently relies on the bits being arranged in a particular order.
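For concreteness, the storage scheme described above might look something like this (a minimal sketch with made-up names; the real HugeFloat internals will certainly differ):

```cpp
#include <cstdint>
#include <vector>

// Store a non-negative integer in base 100: two decimal digits per byte,
// least-significant pair first (so 1234 -> {34, 12}).
std::vector<std::uint8_t> toBase100(unsigned long long n)
{
    std::vector<std::uint8_t> limbs;
    do {
        limbs.push_back(static_cast<std::uint8_t>(n % 100));
        n /= 100;
    } while (n != 0);
    return limbs;
}

// Reassemble the number from its base-100 elements, most significant first.
unsigned long long fromBase100(const std::vector<std::uint8_t>& limbs)
{
    unsigned long long n = 0;
    for (std::size_t i = limbs.size(); i-- > 0; )
        n = n * 100 + limbs[i];
    return n;
}
```
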

Do you have any bright ideas about how one might go about performing binary arithmetic in this situation? Any thoughts are appreciated. Thanks.

2. Since the operators are 'bitwise', it suffices to just go through and perform the operation bit-by-bit.
Note, however, that performing such an operation on a floating point number doesn't seem to have any useful meaning.

Cheers

3. Actually, no, it does not suffice to just go through and perform the operation bit-by-bit.
decimal 12 in binary is 00001100
decimal 34 in binary is 00100010
decimal 1234 in binary is 00000100 11010010
12 ^ 15 = 3 (00000011)
34 ^ 15 = 45 (00101101)
1234 ^ 15 = 1245 (00000100 11011101)
So, please do explain how you believe it suffices to go through and perform the operation bit-by-bit.

4. I didn't notice you had it set up in base 10. Here is what I was saying, although it is really for a binary-based system. (It could be modified, though; it'll just take a little more work.)
a = 1234, so a[0]=34, a[1]=12
b = 15, so b[0]=15

c: another number with as many chunks as needed (max(size(a), size(b)))
i: 0 (index of the least significant chunk of a, so the chunks line up)
j: 0 (likewise for b)
k: 0
while both i and j in range
-- c[k] = a[i] ? b[j] where ? is the operator of choice
-- i+=1, j+=1, k+=1
And how you fill in the remaining bits of c is just a matter of what operator you are using.

*edit*
The complication you have with a base-10 setup, instead of base 2, 4, 8, 16, etc., is that each decimal digit uses log2(10) = 1/log10(2) ≈ 3.32193 bits, so chunk boundaries never fall on bit boundaries: you constantly have to grab some information from the next chunk of data, because you can't do bitwise arithmetic on fractional bits.
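With elements that hold a whole number of bits -- say base 256, one byte per element -- the chunk-by-chunk loop really is that simple. A sketch under that assumption (little-endian element order, names made up):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Apply a bitwise operator element-by-element to two base-256 numbers
// stored least-significant byte first. The shorter operand is treated
// as if it were padded with leading zero bytes.
std::vector<std::uint8_t> bitwiseOp(const std::vector<std::uint8_t>& a,
                                    const std::vector<std::uint8_t>& b,
                                    std::uint8_t (*op)(std::uint8_t, std::uint8_t))
{
    std::vector<std::uint8_t> c(std::max(a.size(), b.size()));
    for (std::size_t k = 0; k < c.size(); ++k) {
        std::uint8_t x = k < a.size() ? a[k] : 0;
        std::uint8_t y = k < b.size() ? b[k] : 0;
        c[k] = op(x, y);
    }
    return c;
}

std::uint8_t xorOp(std::uint8_t x, std::uint8_t y) { return x ^ y; }
```

With this layout, 1234 is stored as {0xD2, 0x04}, and XORing with 15 touches only the low byte, as it should.
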

5. Well, shifting bits (>> or <<) is the same as multiplying or dividing by 2^n.

Shifting left by 2 (<< 2) is the same as multiplying by 2^2.

Shifting right by 2 (>> 2) is the same as dividing by 2^2.
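A trivial sketch of that equivalence (unsigned values only; right-shifting negative signed values is implementation-defined):

```cpp
#include <cstdint>

// Shifting left by n multiplies by 2^n; shifting right by n divides by 2^n
// (truncating toward zero). Unsigned types keep the behaviour well-defined.
std::uint32_t timesPow2(std::uint32_t x, unsigned n) { return x << n; }
std::uint32_t divPow2(std::uint32_t x, unsigned n)   { return x >> n; }
```
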

Be careful, though, if you knock off high-order bits (when you go out of range you lose significant bits).

From what I've learned, bitwise operators are there to manipulate the bits (change flags): turn them on or off, get a certain bit or bits, and so on.

Also, even though you are using a base-10 (decimal) representation, your computer already represents your variable values in binary, so in reality it is already doing binary operations. No matter what representation you use (base 10, base 8, base 16), it's just a representation.

6. Not quite; the representation does matter here because of the way the number is broken up. As I mentioned above, you end up with fractional bits (0 < x < 1 bits of information) which need to be compensated for. If I can work something out to deal with this in the next 15 minutes or so, I'll post it; otherwise it'll be a while before I get back to looking at it.

7. Nextus: thanks for tossing in your 2 cents, but shifting is nice and easy (and btw there will be no problem of "knocking off significant bits" because these numbers are of unlimited length).

Zach: thanks for the input and for trying to come up with some good ideas. I don't see how that previous solution could work, because a single element taken out of either operand is not equivalent to the value it contributes to the whole number. For example, pulling the 12 out of 1234 and doing any bitwise math on it will not produce a valid answer, because you would be modifying the value 12 instead of 1200.

A possible solution I am considering (which, I believe, would be far too taxing) would be to actually convert the number to binary (I'd have to think even more about how I could go about doing that!) then doing the bitwise arithmetic, then converting it back to a decimal value for storage... Not a very attractive alternative.
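The conversion being dreaded here is essentially repeated short division by 2 over the base-100 array, collecting remainders as bits. A rough sketch, with hypothetical helper names:

```cpp
#include <cstdint>
#include <vector>

// Divide a base-100 number (least-significant element first) by 2 in place,
// returning the remainder bit. Classic short division, working from the
// most significant element down.
int halve(std::vector<std::uint8_t>& limbs)
{
    int rem = 0;
    for (std::size_t i = limbs.size(); i-- > 0; ) {
        int cur = rem * 100 + limbs[i];
        limbs[i] = static_cast<std::uint8_t>(cur / 2);
        rem = cur % 2;
    }
    return rem;
}

// Extract the bits of a base-100 number, least significant bit first.
std::vector<int> toBits(std::vector<std::uint8_t> limbs)
{
    std::vector<int> bits;
    auto isZero = [&] {
        for (auto d : limbs) if (d) return false;
        return true;
    };
    while (!isZero())
        bits.push_back(halve(limbs));
    if (bits.empty()) bits.push_back(0);
    return bits;
}
```

Each bit costs a pass over the whole array, so the conversion is quadratic overall; that is the "far too taxing" part.
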

8. Originally Posted by LuckY
A possible solution I am considering (which, I believe, would be far too taxing) would be to actually convert the number to binary (I'd have to think even more about how I could go about doing that!) then doing the bitwise arithmetic, then converting it back to a decimal value for storage... Not a very attractive alternative.
As I stated: even though you are using a base-10 (decimal) representation, your computer already represents your variable values in binary, so in reality it is already doing binary calculations on them. No matter what representation you use (base 10, base 8, base 16), it's just a representation.

9. Originally Posted by LuckY
Actually, no, it does not suffice to just go through and perform the operation bit-by-bit.
decimal 12 in binary is 00001100
decimal 34 in binary is 00100010
decimal 1234 in binary is 00000100 11010010
12 ^ 15 = 3 (00000011)
34 ^ 15 = 45 (00101101)
1234 ^ 15 = 1245 (00000100 11011101)
So, please do explain how you believe it suffices to go through and perform the operation bit-by-bit.
Wrong!

Code:
decimal 12 in binary is 00001100
decimal 34 in binary is 00100010
decimal 1234 in binary is 00000100 11010010
12 ^ 00 = 12 (00001100)
34 ^ 15 = 45 (00101101)
1234 ^ 0015 = 1245 (00000100 11011101)
It's 1234^15, not 1234^1515.
Is your HugeFloat stored with ints? Later you should use 1000000000 as the base for each int instead of 100 (that's another issue, but it would optimize a lot). Try squaring it then.

10. Yeah, clearly you could not simply take out some of the information and just move on. It would be a growing process which, as far as I can tell, would require converting everything up to the size of the smallest operand into binary (for &; everything for | and ^). Not necessarily all at once, but as I said, it would be a growing process.

The problem is, the operations &, |, ~, ^, etc. rely on the representation being binary. Something to consider, though, is that the operations +, -, *, and / are independent of representation. So what I'm getting at is: if you stored everything so that a[0] is the first byte/2-bytes/n-bytes, a[1] is the next block, etc., then you could do the bitwise operations fairly simply (as I mentioned in the first post), and +, -, * should not be too hard to implement either (you just have to get yourself thinking in binary). The / operator is also doable; it just requires a little more binary thinking (I'm presuming you used something along the lines of long division, which you can also do in binary).

Cheers
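The claim that + carries over unchanged can be sketched for base-256 elements stored least significant first (hypothetical names; the carry logic is exactly decimal long addition with 256 in place of 10):

```cpp
#include <cstdint>
#include <vector>

// Add two base-256 numbers stored least-significant byte first.
// Carry propagation is identical to decimal long addition.
std::vector<std::uint8_t> add256(const std::vector<std::uint8_t>& a,
                                 const std::vector<std::uint8_t>& b)
{
    std::vector<std::uint8_t> c;
    unsigned carry = 0;
    for (std::size_t k = 0; k < a.size() || k < b.size() || carry; ++k) {
        unsigned sum = carry;
        if (k < a.size()) sum += a[k];
        if (k < b.size()) sum += b[k];
        c.push_back(static_cast<std::uint8_t>(sum & 0xFF));
        carry = sum >> 8;                     // at most 1
    }
    if (c.empty()) c.push_back(0);
    return c;
}
```
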

11. Nextus, the problem is that it isn't represented in binary. Each two decimal digits are represented as a binary sequence, but you cannot simply adjoin them without the number changing.

xErath, yeah, there was a small error when performing the operation on the second set of digits, but the point was still valid. Also, the really optimal size for each chunk of data would be the register size.

Cheers

*edit*
Here is a more illustrative example: Let a(b) mean a is written in base b.
119(10) = 01110111(2)
219(10) = 11011011(2)
119(10) ^ 219(10) = 10101100(2) = 172(10)

Breaking it apart (a = 219, b = 119)
a[0] = 2(10) = 10(2)
b[0] = 1(10) = 01(2)
a[1] = b[1] = 19(10) = 00010011(2)
a[0] ^ b[0] = 11(2) = 3(10)
a[1] ^ b[1] = 0
Hence a ^ b = 300(10) != 172(10) <<--- Representation does matter
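The same demonstration in code: XOR on the real values versus XOR applied to each base-100 chunk independently (which is what operating on the stored bytes amounts to; helper names are made up):

```cpp
// XOR on the actual values.
int trueXor(int a, int b) { return a ^ b; }

// Chunk-wise: split into base-100 "digits", XOR corresponding chunks,
// and reassemble in base 100.
int chunkwiseXor(int a, int b)
{
    int result = 0, place = 1;
    while (a > 0 || b > 0) {
        result += ((a % 100) ^ (b % 100)) * place;
        a /= 100;
        b /= 100;
        place *= 100;
    }
    return result;
}
```
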

12. [QUOTE=xErath]Wrong!

Code:
decimal 12 in binary is 00001100
decimal 34 in binary is 00100010
decimal 1234 in binary is 00000100 11010010
12 ^ 00 = 12 (00001100)
34 ^ 15 = 45 (00101101)
1234 ^ 0015 = 1245 (00000100 11011101)
[/QUOTE]

I don't like to get technical, but isn't anything with a 0 in front in octal? So 0015 is really 13 in decimal, whose binary representation is 1101. So 1234 ^ 0015 (I didn't check whether the binary for 1234 was right) is an exclusive OR equal to 0000 0100 1101 1111.

haha

I might be wrong...

13. Originally Posted by nextus
dont like to get technical though..but anything with a 0 in front is octal base
It would be if that was C code, but it isn't. By the same logic, 00001100 would be eleven hundred octal, which is 576 decimal.

14. Originally Posted by xErath
Wrong!
It's 1234^15, not 1234^1515.
Actually, not wrong. I was demonstrating that 1234^15 could not be equivalently derived by doing 12^15 and 34^15, but thanks for playing. We have some lovely parting gifts for you.

As I stated:
even though you are using a base-10 (decimal) representation, your computer already represents your variable values in binary...
You are entirely missing the point, dear boy. While each element contains binary data, the elements taken together are not identical to the binary representation of the value they encode. Get it?

what I'm getting at is, if you stored everything so that a[0] is the first byte/2-bytes/n-bytes, a[1] is the next block, etc., then you could do the bitwise operations fairly simply
While I see what you mean about it being a growing process, I don't follow. What does "the first byte/2-bytes/n-bytes" mean, specifically? The problem with this, as I see it (while still not fully understanding what you mean), is that you cannot "grow" to meet the demand.

After rereading it, I understand what you mean. You're saying that instead of storing 1234 as "12" and "34", it should be stored something like "4" and "210" (4 = 00000100, 210 = 11010010; concatenated they are 1234 = 0000010011010010). A very interesting point: saving them in base 256...

I implemented a big integer class using base 65536, but I found it too problematic: with several elements stored, each would have to be multiplied by 65536 to the power of i (the element's index), and that value is too large to handle natively, so I'd have to use another object for the power and then yet another object to do whatever I need. In actual use I found it to be very much slower than my HugeFloat implementation. It's not only that, but that for seemingly every calculation you need to perform a / or % to retrieve or set an element's value, and each of those operations is awfully greedy.

+, -, * should not be too hard to implement either (you just gotta get yourself thinking in binary). The / operator is also doable
As I mentioned, I've got all those suckers working like clockwork. It's just this tricky bit manipulation that is ailing me.

15. By growing, I just meant that you would not have to convert it all to binary at once, but you would need to keep track of progressively more information.

You can do addition and subtraction with binary-represented numbers quite easily. As for multiplication, you can do it with bitwise operators (so it should actually come out faster than doing it in decimal, which is to be expected, as binary is the natural representation of numbers in the computer). An example is probably the best way to explain this:

Say we want to multiply 5 and 22 (yes, I know, HUGE numbers here).
5 = 101(2)
22 = 10110(2)

5 * 22 = 5 * 2 * 10^1 + 5 * 2 * 10^0 = 110 (ordinary decimal long multiplication)
10110 * 101 = 1 * 10110 * 2^2 + 0 * 10110 * 2^1 + 1 * 10110 * 2^0 = 10110 * 2^2 + 10110 (the same process in binary)
This can be computed with the addition routine and simple bit shifts. So you never deal with the base raised to the power of the index, only with the index itself.
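A fixed-width sketch of that shift-and-add scheme (a HugeFloat version would call the big-number shift and add routines instead of the built-in operators):

```cpp
#include <cstdint>

// Shift-and-add multiplication: for each set bit i of b, add (a << i)
// to the result. Only shifts, bit tests and additions are needed.
std::uint64_t shiftAddMul(std::uint64_t a, std::uint64_t b)
{
    std::uint64_t result = 0;
    while (b != 0) {
        if (b & 1)      // lowest remaining bit of b is set:
            result += a; // add the current shift of a
        a <<= 1;        // next bit position: a * 2
        b >>= 1;
    }
    return result;
}
```
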

And the bitwise algorithm above is basically just: do the operation bit-by-bit on corresponding bits.