views: 337
answers: 5
Hello,

I'm working on a programming problem where I need to handle a number involving 100000 digits. Can Python handle numbers this big?

Thank you

+2  A: 

Sure it can:

>>> s = 10 ** 100000
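
A quick sanity check (added for illustration, not part of the original answer): the result really does have 100001 digits, although converting it to a string takes a noticeable moment.

>>> len(str(s))
100001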
Nadia Alramli
+2  A: 

It seems to work fine:

>>> x = 10**100000
>>> x
10000000000000000000000000000000000000000000000000000000000000000000000000000000
[snip]
00000000L

According to http://docs.python.org/library/stdtypes.html, "Long integers have unlimited precision", which means that their size is limited only by available memory.

Karl Bartel
+7  A: 

Yes; Python 2.x has two types of integers: int, of limited size, and long, of unlimited size. However, all calculations are automatically converted to long if needed. Handling big numbers works fine, but one of the slower operations will be printing all 100000 digits, or even just converting the number to a string.
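
For illustration (not part of the original answer), a quick Python 2.x interpreter session showing the automatic promotion from int to long:

>>> import sys
>>> type(sys.maxint)          # the largest value a plain int can hold
<type 'int'>
>>> type(sys.maxint + 1)      # overflows into an unbounded long automatically
<type 'long'>
>>> type(10 ** 100000)
<type 'long'>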

If you need arbitrary decimal fixed-point precision as well, there is the decimal module.
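
A minimal sketch of that (added for illustration; the precision of 50 digits is just an example, not from the original answer):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 50            # work with 50 significant decimal digits
>>> Decimal(1) / Decimal(7)
Decimal('0.14285714285714285714285714285714285714285714285714')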

kaizer.se
+15  A: 

As other answers indicated, Python does support integer numbers bounded only by the amount of memory available. If you want even faster support for them, try gmpy (as gmpy's author and current co-maintainer I'm of course a little biased here;-):

$ python -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'x+1'
10000 loops, best of 3: 114 usec per loop
$ python -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'y+1'
10000 loops, best of 3: 65.4 usec per loop

Typically, arithmetic is not the bottleneck for working with such numbers (although gmpy's direct support for some combinatorial and number-theoretical functions can help if that's what you're doing with such numbers). Turning the numbers into decimal strings is probably the common operation that will feel slowest...:

$ python -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'str(x)'
10 loops, best of 3: 3.11 sec per loop
$ python -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'str(y)'
10 loops, best of 3: 27.3 msec per loop

As you see, even in gmpy stringification of huge numbers can be hundreds of times slower than a simple addition (alas, it's an intrinsically complicated operation!); but in native Python code stringification can be tens of thousands of times slower than a simple addition, so you really want to watch out for that, especially if you decide not to download and install gmpy (for example because you can't: e.g., gmpy is not currently supported on Google App Engine).

Finally, an intermediate case:

$ python2.6 -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'x*x'
10 loops, best of 3: 90 msec per loop
$ python2.6 -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'y*y'
100 loops, best of 3: 5.63 msec per loop
$ python2.6 -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'y*x'
100 loops, best of 3: 8.4 msec per loop

As you see, multiplying two huge numbers in native Python code can be almost 1000 times slower than a simple addition, while with gmpy the slowdown is less than 100 times (and it's not too bad even if only one of the numbers is already in gmpy's own format, so that there's the overhead of converting the other).

Alex Martelli
A: 

As already pointed out, Python can handle numbers as big as your memory will allow. I would just like to add that as the numbers grow bigger, the cost of all operations on them increases. This is not just true for printing/converting to string (although that's the slowest): adding two large numbers (larger than what your hardware can natively handle) is no longer O(1). I'm just mentioning this to point out that although Python neatly hides the details of working with big numbers, you still have to keep in mind that operations on these big numbers are not always like those on ordinary ints.
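
One rough way to see that growth for yourself (a sketch added for illustration, not part of the original answer; the digit counts are arbitrary):

import timeit

# time 1000 additions at several sizes; the measured time grows with the number of digits
for digits in (100, 10000, 100000):
    t = timeit.timeit('n + n', setup='n = 10 ** %d' % digits, number=1000)
    print('%6d digits: %.6f seconds for 1000 additions' % (digits, t))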

MAK