Python is pretty good on its own, but better with gmpy
(which bridges it to the GMP library others have mentioned, or alternatively to MPIR, the kinda-work-alike fork of GMP [[support for which is a work in progress;-)]]). Consider:
$ python -mtimeit -s'x=int("1"*9999); y=int("2"*9999)' 'x*y'
100 loops, best of 3: 6.46 msec per loop
i.e., in pure Python, multiplying two 10K-digit ints takes 6.5 milliseconds or so. And...:
$ python -mtimeit -s'from gmpy import mpz; x=mpz("1"*9999); y=mpz("2"*9999)' 'x*y'
1000 loops, best of 3: 326 usec per loop
...with gmpy at hand, the operation is about 20 times faster. If you have hundreds of thousands of digits rather than just thousands, the gap is even more extreme:
$ python -mtimeit -s'x=int("1"*199999); y=int("2"*199999)' 'x*y'
10 loops, best of 3: 675 msec per loop
vs
$ python -mtimeit -s'from gmpy import mpz; x=mpz("1"*199999); y=mpz("2"*199999)' 'x*y'
100 loops, best of 3: 17.8 msec per loop
So, with 200K digits instead of just 10K, gmpy's speed advantage grows to 38 times or so.
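If you'd rather not run -mtimeit four times by hand, here's a minimal sketch reproducing the whole comparison in one script with the stdlib timeit module (assuming gmpy 1.x is importable; with gmpy2 you'd just change the import inside mpz_setup):

import timeit

for digits in (9999, 199999):
    int_setup = 'x = int("1" * %d); y = int("2" * %d)' % (digits, digits)
    mpz_setup = ('from gmpy import mpz; '
                 'x = mpz("1" * %d); y = mpz("2" * %d)') % (digits, digits)
    # best of 3 repeats, 10 multiplications each, much like -mtimeit does
    t_int = min(timeit.repeat('x * y', int_setup, repeat=3, number=10)) / 10
    t_mpz = min(timeit.repeat('x * y', mpz_setup, repeat=3, number=10)) / 10
    print('%6d digits: int %10.3f ms, mpz %8.3f ms, ~%.0fx faster'
          % (digits, t_int * 1e3, t_mpz * 1e3, t_int / t_mpz))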
If you routinely need to handle integers of this magnitude, Python + gmpy is really a workable solution (of course I'm biased, since I've authored and maintained gmpy over the last few years, exactly because I ♥ Python (hey, my license plate is P♥thon!-) and one of my hobbies (combinatorial arithmetic) has me dealing with such numbers pretty often;-).
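For a taste of the combinatorial side, here's a small example (hypothetical, not my actual hobby code): a binomial coefficient with thousands of digits, keeping every intermediate as an mpz -- and if memory serves, gmpy also ships a builtin bincoef that's faster still:

from gmpy import mpz

def binomial(n, k):
    # multiplicative formula C(n, k) = prod_{i=1..k} (n - k + i) / i;
    # after step i the partial product is exactly C(n - k + i, i),
    # an integer, so every floor division below is exact
    result = mpz(1)
    for i in range(1, k + 1):
        result = result * (n - k + i) // i
    return result

c = binomial(20000, 10000)
print(len(str(c)))  # about 6000 decimal digits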