views: 273

answers: 3

What complexity are the methods multiply, divide and pow in BigInteger currently? There is no mention of the computational complexity in the documentation (nor anywhere else).

+2  A: 

If you look at the source of BigInteger (shipped with the JDK), it appears to me that multiply(..) is O(n^2) (the actual work happens in multiplyToLen(..), which implements the schoolbook algorithm; a sketch of it is below). The code for the other methods is a bit more complex, but you can check for yourself.

Note: this is for Java 6. I assume it won't differ in Java 7.
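
For illustration, here is a minimal sketch of the schoolbook algorithm that multiplyToLen(..) implements, using little-endian base-10 digit arrays for readability (the JDK works on int words in base 2^32, but the loop structure is the same):

    // Sketch only, not the JDK code: schoolbook multiplication of two numbers
    // stored as little-endian base-10 digit arrays. The nested loops over the
    // digits of both operands are what make the algorithm O(n^2).
    static int[] schoolbookMultiply(int[] a, int[] b) {
        int[] result = new int[a.length + b.length];
        for (int i = 0; i < a.length; i++) {
            long carry = 0;
            for (int j = 0; j < b.length; j++) {
                long sum = result[i + j] + (long) a[i] * b[j] + carry;
                result[i + j] = (int) (sum % 10);
                carry = sum / 10;
            }
            result[i + b.length] = (int) carry; // high digit for this row
        }
        return result;
    }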

Bozho
There are several multiplication algorithms with different complexities: http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations ... Can you tell O(n^2) apart from O(n^1.585) or O(n^1.465)?
Joey
I believe there have been changes in Java 7. I can't remember the details I found while searching; they were scarce.
Rössel: There _exist_ other algorithms for multiplication, but Java 6 doesn't use them. When multiplying large numbers you'd certainly notice the difference between the schoolbook algorithm and Karatsuba multiplication (a sketch follows below). The others are less of a jump unless you're filling up primary memory with the numbers.
Charles
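
For the curious, a minimal Karatsuba sketch built on BigInteger's own primitives (illustrative only; this is not how Java 6 multiplies, and it assumes non-negative operands for simplicity):

    import java.math.BigInteger;

    // Karatsuba sketch: one n-bit multiply becomes three n/2-bit multiplies,
    // giving O(n^log2(3)) ~ O(n^1.585). Assumes non-negative operands.
    static BigInteger karatsuba(BigInteger x, BigInteger y) {
        int n = Math.max(x.bitLength(), y.bitLength());
        if (n <= 64) return x.multiply(y); // small enough: schoolbook wins
        int half = n / 2;
        BigInteger xHigh = x.shiftRight(half);
        BigInteger xLow  = x.subtract(xHigh.shiftLeft(half));
        BigInteger yHigh = y.shiftRight(half);
        BigInteger yLow  = y.subtract(yHigh.shiftLeft(half));
        BigInteger p1 = karatsuba(xHigh, yHigh);
        BigInteger p2 = karatsuba(xLow, yLow);
        BigInteger p3 = karatsuba(xHigh.add(xLow), yHigh.add(yLow));
        // x*y = p1*2^(2*half) + (p3 - p1 - p2)*2^half + p2
        return p1.shiftLeft(2 * half)
                 .add(p3.subtract(p1).subtract(p2).shiftLeft(half))
                 .add(p2);
    }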
+2  A: 

Measure it. Run the operations with linearly increasing operand sizes and plot the times on a diagram. Don't forget to warm up the JVM (several runs) to get valid benchmark results.

Whether the operations are linear O(n), quadratic O(n^2), polynomial, or exponential should then be obvious. A rough harness is sketched below.
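
For example, something along these lines (the bit sizes, seed, and warm-up counts are arbitrary choices, not anything prescribed):

    import java.math.BigInteger;
    import java.util.Random;

    public class MultiplyBench {
        public static void main(String[] args) {
            Random rnd = new Random(42);
            // Double the operand size each round and time one multiply.
            for (int bits = 1 << 10; bits <= 1 << 19; bits <<= 1) {
                BigInteger a = new BigInteger(bits, rnd);
                BigInteger b = new BigInteger(bits, rnd);
                for (int i = 0; i < 10; i++) a.multiply(b); // warm up the JIT
                long start = System.nanoTime();
                a.multiply(b);
                System.out.printf("%7d bits: %10d ns%n",
                                  bits, System.nanoTime() - start);
            }
        }
    }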

EDIT: While you can give algorithms theoretical bounds, these may not be that useful in practice. First of all, the complexity does not include the constant factor: some linear or subquadratic algorithms are simply not useful because they eat so much time and resources that they are inadequate for the problem at hand (e.g. Coppersmith-Winograd matrix multiplication). Then your computation may have quirks you can only detect by experiment. There are preparatory algorithms which do nothing to solve the problem itself but speed up the real solver (matrix preconditioning). There are suboptimal implementations. With larger inputs, your speed may drop dramatically (cache misses, paging etc.). So for practical purposes, I advise experimentation.

The best approach is to double the input length each time and compare the times. And yes, you can tell whether an algorithm has n^1.5 or n^1.8 complexity. Quadruple the input length and an n^1.5 algorithm needs only half the time an n^2 one would (4^1.5 = 8 versus 4^2 = 16). To separate n^1.8 from n^2 by the same factor of two, multiply the length by 32 (since 32^(2-1.8) = 2). A sketch of the exponent estimate follows below.
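
Turning two such measurements into an exponent estimate is then a one-liner (sketch; t1 and t2 would come from a harness like the one above):

    // For a running time of c*n^k, doubling n multiplies the time by 2^k,
    // so k = log2(t2/t1), where t1 is the time for n-bit operands and t2
    // the time for 2n-bit operands.
    static double estimateExponent(long t1, long t2) {
        return Math.log((double) t2 / t1) / Math.log(2);
    }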

Thorsten S.
That might work. I would need to test large values of n. If I measured the time to multiply two n-bit BigIntegers (t_0) and then two 2n-bit BigIntegers (t_1), I might expect the complexity to be O(n^(log2(t_1/t_0))). In general I am a little skeptical of empirical methods, though (possibly unfairly).
This is a difficult approach to take, though. _A priori_, there's no reason to think that a single algorithm is used rather than a combination of algorithms. Thus the scaling from 10 digits to 1000 digits might be different from the scaling from 1000 digits to 3000 digits.
Charles
A: 

There is a newer, "better" BigInteger class that is not being used by the Sun JDK, out of conservatism and for lack of useful regression tests (huge data sets). The author of the improved algorithms may have discussed the old BigInteger in the code comments.

Here you go http://futureboy.us/temp/BigInteger.java

i30817