Hi there. Suppose you have a 1,000,000-digit number held in memory and you need to represent it in its smallest possible form. Would prime-factorising the number be advisable, or converting it into a completely different base, e.g. base 60? A base that high would need a character set larger than hex's 0-f. The number is held in a data structure and can therefore be manipulated accordingly. Any ideas on this, people?
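For what it's worth, one baseline worth comparing against: converting the digits to a higher base shrinks the *character count* but not the information content, and the densest common packing is base 256 (raw bytes), which needs roughly digits × log2(10) / 8 ≈ 0.415 bytes per decimal digit. Here's a minimal Python sketch of that packing, using a small stand-in value rather than a real million-digit number; the variable names are just illustrative:

```python
# Sketch: pack a decimal digit string into raw bytes (base 256).
# A 20-digit stand-in for the million-digit number.
digits = "9" * 20
n = int(digits)  # parse into Python's arbitrary-precision int

# Smallest big-endian byte representation: ceil(bit_length / 8) bytes.
packed = n.to_bytes((n.bit_length() + 7) // 8, "big")

print(len(digits), "decimal digits ->", len(packed), "bytes")
# 20 decimal digits -> 9 bytes

# Round-trip check: unpacking recovers the original value.
assert int.from_bytes(packed, "big") == n
```

Prime factorisation, by contrast, is astronomically expensive at that size and doesn't guarantee a smaller result anyway (the number could itself be prime, or have a factor nearly as long as the original), so byte packing is probably the more practical yardstick.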