So, I know how to convert between bases, but I was wondering how the computer implements base conversion for I/O (I assume this is done by the OS?).

I can see that for relatively small numbers the conversion is fairly simple, but when I give the computer 18446744073709551615 as numeric input, it knows to store it as 1111111111111111111111111111111111111111111111111111111111111111 (sixty-four ones), and then display the decimal again when asked to interpret the bits as an integer. The only algorithms I know would get really slow with such large numbers, so how does the computer do this?
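For reference, the only approach I know is repeated division by 10, collecting remainders; a quick sketch in Python of what I mean:

```python
def to_decimal_string(n):
    """Naive base conversion: repeatedly divide by 10 and collect remainders."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, 10)   # r is the next (least significant) decimal digit
        digits.append(str(r))
    return "".join(reversed(digits))
```

Each step needs a full division of the big number, so the cost grows quickly with the number of digits, which is why I suspect real implementations do something cleverer.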

I started to think about this because I was wondering how to implement a program that would read a binary file and print it as a huge decimal number (just for giggles, I guess).
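To make the idea concrete, here is roughly what I'd try in Python, leaning on the built-in bignum conversion, which is exactly the machinery I'm asking about:

```python
def file_as_decimal(path):
    """Read a file's raw bytes and render them as one big decimal number."""
    with open(path, "rb") as f:
        data = f.read()
    # Interpret the whole byte string as a single big-endian unsigned integer.
    n = int.from_bytes(data, "big")
    return str(n)  # this str() call is the base conversion I'm curious about
```

For a large file this `str()` call is converting a number with millions of bits, so the naive digit-by-digit algorithm seems like it would be far too slow.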

Thanks for any insight.