OK, I have a lot of integers, and I want to simulate fractional values (fixed-point style) by treating each number as if it were several times smaller than it really is. In the end, though, I still have to divide everything by the scale factor I chose. I've heard integer division is relatively slow, so I thought I could speed up that final step by using 256 as the factor and dividing by it by reading only the upper 3 bytes of each 4-byte int. How should I go about this?
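Here's a minimal sketch of what I have in mind, assuming 32-bit unsigned ints on a little-endian machine (so the low-order byte comes first in memory, and the upper 3 bytes start at offset 1):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Fixed-point convention: every value is stored scaled up by 256. */
    uint32_t a = 3 * 256; /* represents 3.0            */
    uint32_t b = 700;     /* represents 700/256 ~ 2.73 */

    uint32_t sum = a + b; /* addition needs no correction */

    /* Dividing by 256 to get back to a plain integer.
       Option 1: the byte trick described above -- copy the upper
       3 bytes of the 4-byte int (little-endian assumed). */
    uint32_t via_bytes = 0;
    memcpy(&via_bytes, (const unsigned char *)&sum + 1, 3);

    /* Option 2: a right shift by 8 bits, which should be
       equivalent for unsigned values. */
    uint32_t via_shift = sum >> 8;

    printf("%u %u\n", via_bytes, via_shift); /* both print the same value */
    return 0;
}
```

Is the byte-reading version actually any faster than the shift (or than a plain division by 256, which I assume the compiler can optimize), or is there a better way to do this?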