Is there an efficient algorithm for conversion between numeral systems when the size of the source integer is arbitrary?

For example, assume the input is an integer array {1, 4, 8}, representing 148 in decimal. It might be converted to {9, 4} in hexadecimal, {2, 2, 4} in octal, {1, 0, 0, 1, 0, 1, 0, 0} in binary, or just {148} in base 1234, and so on.

It's simple when the actual value fits in a machine word. But for arbitrary sizes, I cannot find anything more efficient than O(n^2).

A: 

Divide by the base, push back the remainder, rinse and repeat while the quotient != 0.

So, for example, to convert 148 to base 16:

148 / 16 = 9 r 4
  9 / 16 = 0 r 9

So, 148 in hex is 0x94. This shouldn't take too long.
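The loop above can be sketched in Python, which has native big integers, so the same code works for arbitrarily large values (the function name is my own):

```python
def to_digits(n, base):
    """Convert a non-negative integer to its digits in the given base,
    most significant digit first, by repeated division."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        n, r = divmod(n, base)  # quotient and remainder in one step
        digits.append(r)
    digits.reverse()  # remainders come out least-significant first
    return digits

# to_digits(148, 16) -> [9, 4]
# to_digits(148, 8)  -> [2, 2, 4]
```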

klez
Unfortunately, it is not that simple when the number is big enough. Think about 10^100, which can't be represented in a single machine integer on current computer architectures.
summerlight
Of course, some abstraction like a big_int class can be used, but that makes the problem worse, since the time complexity of the division/modulo operations is O(n^2) and we need O(n) of them for a conversion.
summerlight
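One standard way to reduce the number of expensive big-integer divisions is divide-and-conquer: repeatedly split the number with divmod by base^(2^i), halving the digit count at each level, and recurse on both halves. With subquadratic big-integer multiplication and division this brings the conversion down to roughly O(M(n) log n), where M(n) is the multiplication cost; with schoolbook arithmetic it is still about O(n^2). A hedged sketch in Python (names and structure are my own, not from the thread):

```python
def to_digits_dc(n, base):
    """Divide-and-conquer base conversion of a non-negative integer.
    Splits n by base**(2**i), halving the digit count per recursion level."""
    # Precompute base, base^2, base^4, ... until squaring would exceed n.
    powers = [base]
    while powers[-1] ** 2 <= n:
        powers.append(powers[-1] ** 2)

    def rec(m, i):
        # Converts m, which is known to satisfy m < powers[i]**2.
        if i < 0:
            return [m]  # m is a single digit (m < base)
        hi, lo = divmod(m, powers[i])
        lo_digits = rec(lo, i - 1)
        if hi == 0:
            return lo_digits  # avoids leading zeros at the top level
        hi_digits = rec(hi, i - 1)
        # The low half must occupy exactly 2**i digit positions.
        pad = (1 << i) - len(lo_digits)
        return hi_digits + [0] * pad + lo_digits

    return rec(n, len(powers) - 1)

# to_digits_dc(148, 16) -> [9, 4]
```

Note that CPython's own big-integer division is not subquadratic, so this sketch mainly illustrates the recursion structure; the asymptotic win requires a library (e.g. GMP) with fast division.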