views:

683

answers:

6

What is the Big-O complexity of the most widespread algorithms for the basic arithmetic operations like multiplication, square root, logarithm, scalar and matrix product? Do there exist exotic algorithms which are more efficient in terms of Big-O complexity, but which for some reason are not widespread in practical solutions (e.g. not implemented in popular software libraries)?

+6  A: 

Operations don't have complexity, algorithms do. For example, there are various square root algorithms, and they will have different complexity.

Skilldrick
Multiplication is an algorithm operating on two arrays of bits, with complexity O(n²) if I am not mistaken.
Tronic
@Skilldrick: OP is talking about the most commonly used algorithms, so in a sense this answer is irrelevant.
Moron
@Moron: I think the question has been slightly edited since I answered.
Skilldrick
@Tronic: Using FFT, multiplication takes only O(n log n)
Ritsaert Hornstra
@Skilldrick: I was guessing that is what might have happened, but strangely I don't see any edits! It is now irrelevant after the edit, and in any case, it was always more suited as a comment than as an answer (IMO).
Moron
@Moron: There's a 5-minute period after asking a question in which edits don't show up. I only just discovered that. :P I stand by my answer :)
Skilldrick
@Skilldrick: I haven't edited the body of my question yet :)
psihodelia
@psihodelia: Fine, I must just be going blind then :P At least you've got some answers that actually answer your question :)
Skilldrick
FFT multiplication is not just `O(n log n)`. Well, maybe if `n` is the number itself, but in these discussions `n` is typically the number of bits.
Larry
@Skilldrick: I see you don't have a Disciplined badge yet ;-)
Moron
@Moron: I was thinking that just a few minutes ago :) But we'd lose all this interesting discussion...
Skilldrick
@Skilldrick: operations (or anyway function and decision problems) do have complexity characteristics. Usually when someone says "PRIMES is soft-O(N^6)", they mean "there exists a soft-O(N^6) algorithm for PRIMES". N being the number of digits in this case. I don't think there's too much risk of ambiguity, confusion and errors-of-type if we say things like "multiplication is polynomial-time" instead of "multiplication can be done with a polynomial-time algorithm".
Steve Jessop
+12  A: 
KennyTM
It is interesting how much faster O(N^2.38) is than O(N³).
psihodelia
Unfortunately, the answer to that is "it depends". As the wikipedia article mentions, the algorithm isn't widely used because it only actually results in a reduction in runtime for impractically huge inputs.
Iain Galloway
In practice, the O(N^2.38) algorithm isn't any faster at all, because there's more to speed than algorithmic complexity.
Pillsy
Same deal with the "fast" methods for multiplication. O(n²) isn't very big when n is like 2^8. Schönhage–Strassen isn't even worth thinking about until you've got really huge numbers in the first place (Wikipedia suggests 2^15 - 2^17 digits before you start to get a payoff).
Iain Galloway
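As an illustrative back-of-envelope calculation (not from the thread): if the O(N^2.38) algorithm carried, say, a 100x larger constant factor, it would only win once N^(3-2.38) > 100, i.e. N > 100^(1/0.62) ≈ 1700; the crossover points actually reported for Coppersmith–Winograd-style algorithms are far larger still, which is why the naive and Strassen methods dominate in practice.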
+1 Good answer.
Skilldrick
@KennyTM: What is a "Big-int"?
Lazer
@eSKay: Arbitrary-precision integer.
KennyTM
@KennyTM: thanks!
Lazer
+4  A: 

You can treat most simple operations as O(1) because your input size is usually fixed (i.e. 32 or 64 bits).

Under normal conditions, your platform will be performing exactly the same operation for multiplication, square root, logarithm etc. regardless of the "size" of your input (i.e. int a = 0; and int b = Int32.MaxValue are both 32-bit integers).

It gets interesting once you start to look at matrices or representing arbitrary-precision numbers, but someone already linked the Wikipedia summary, so I won't go into that.

Just don't use Schönhage–Strassen to multiply "normal" little numbers. It'll make me cry. Just because an algorithm is O(n²) doesn't mean it's bad - particularly when n is almost always 2^5 or 2^6.
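
For illustration only (a sketch, not part of the original answer): the contrast in Java between a fixed-width `long` multiplication, which costs the same regardless of the values, and `java.math.BigInteger`, whose cost grows with the bit length of the operands. The class name is made up for the example.

```java
import java.math.BigInteger;

public class FixedVsArbitrary {
    public static void main(String[] args) {
        // Fixed-width multiplication: effectively O(1) - the hardware does the
        // same work whatever the values, as long as they fit in 64 bits.
        long a = 1234567L, b = 7654321L;
        System.out.println(a * b);

        // Arbitrary-precision multiplication: the cost grows with the bit
        // length N of the operands (roughly O(N^2) for the schoolbook
        // algorithm; libraries switch to faster algorithms for huge inputs).
        BigInteger x = BigInteger.valueOf(2).pow(100_000).subtract(BigInteger.ONE);
        BigInteger y = BigInteger.valueOf(2).pow(100_000).add(BigInteger.ONE);
        System.out.println(x.multiply(y).bitLength()); // ~200,000 bits
    }
}
```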

Iain Galloway
That's a very good point on operations on 32-bit ints.
Skilldrick
In practice you're right about the simple operations. However, as a theorist, I want to say that you can still talk about the complexity in terms of the number of bits in the number. The same algorithm may be fine for 32b, but gets slow for 64b, or when we eventually get to 1024b or something.... Again, realistically you're right, but it's still an interesting question.
Brian Postow
*nods*, it's absolutely an interesting question as soon as you step out of the safe world of fixed-length inputs.
Iain Galloway
+1  A: 

Square root and logarithm may be implemented in various ways, greatly affecting their complexity (judged in terms of the required precision).

If they are implemented with lookup tables (and some kind of interpolation), the memory requirement really explodes as more precision is required but the complexity is that of looking up the value in the array and possibly applying the interpolation.

More popularly, they seem to be implemented from their series definitions: recurse or iterate a statement for a number of rounds until you reach the required precision. Here the number of rounds may get very high as more precision is needed, and the calculations themselves are also affected by the increased precision.
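
As a hedged illustration of the iterative approach (a sketch, not part of the original answer): Newton's method for the square root, where both the number of iterations and the cost of each iteration grow with the precision requested. The class name, method name and guard-digit margin are choices invented for this example.

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class NewtonSqrt {
    // Iterate x_{k+1} = (x_k + a / x_k) / 2 until successive estimates agree
    // to the requested number of digits. More digits means more iterations,
    // and each iteration's arithmetic also gets more expensive.
    static BigDecimal sqrt(BigDecimal a, int digits) {
        MathContext mc = new MathContext(digits + 5);                   // a few guard digits
        BigDecimal two = BigDecimal.valueOf(2);
        BigDecimal x = new BigDecimal(Math.sqrt(a.doubleValue()), mc);  // rough initial guess
        BigDecimal tolerance = BigDecimal.ONE.movePointLeft(digits);
        BigDecimal prev;
        do {
            prev = x;
            x = a.divide(x, mc).add(x, mc).divide(two, mc);
        } while (x.subtract(prev).abs().compareTo(tolerance) > 0);
        return x.round(new MathContext(digits));
    }

    public static void main(String[] args) {
        System.out.println(sqrt(BigDecimal.valueOf(2), 50));            // ~1.41421356...
    }
}
```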

Tronic
+1 Interesting. Does that mean that N becomes the precision?
Skilldrick
@Skilldrick you can definitely do it that way. There are algorithms that are measured by the size of their OUTPUT instead of their input. They have a name, but it's Friday, so I can't be bothered to remember it B-)
Brian Postow
A: 

There's a Fourier-type algorithm that does integer multiplication as well (Schönhage–Strassen).

I thought that there was a version of Strassen's algorithm that does slightly better than normal for integer multiplication, but now that I think of it, that one ends up being the same as the straightforward approach...

Addition and subtraction are pretty much just addition and subtraction. Division and square root are probably interesting though...

ALSO: Note that so far everyone has talked about INTEGER arithmetic. Once you get to floats/doubles all bets are off. Then you get into the world of numerical analysis, and that's a whole field of its own...

Brian Postow
+1  A: 

Take a look at BigInteger, for arbitrary-length integers. Everything now has a cost, in terms of the size of the input, which is the number of bits (typically O(log K) bits for a number K). I will use N for the number of bits below.

For example, addition and subtraction are now O( N ). Multiplication is either O( N^2 ) (naive) or O( N (log N)^(2+epsilon) ) with FFT.

Other algorithms include the "power" function, which takes O( N ) multiplications. (Except now each multiplication has a cost!)

There are additional complexities for BigDecimal, the arbitrary-length decimal equivalent; beyond some of the more basic operations, things get more interesting as well (especially if you want to figure out how much precision you need). You can take a look at Java's implementation.
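
A hedged sketch of the powering point (not from the original answer): exponentiation by squaring over `java.math.BigInteger` uses O(log e) multiplications for exponent e, while each of those multiplications itself has a cost that grows with the bit length of the intermediate results. The class and method names are invented for the example.

```java
import java.math.BigInteger;

public class BigIntPow {
    // Exponentiation by squaring: O(log e) BigInteger multiplications.
    // Each multiplication is itself non-trivial - its cost grows with the
    // bit length of the (growing) intermediate results.
    static BigInteger pow(BigInteger base, long e) {
        BigInteger result = BigInteger.ONE;
        BigInteger b = base;
        while (e > 0) {
            if ((e & 1) == 1) {
                result = result.multiply(b);
            }
            b = b.multiply(b);
            e >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(pow(BigInteger.valueOf(3), 100)); // 3^100, 48 digits
    }
}
```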

Larry
A naive implementation of the power function requires O(n) multiplications, but smart implementations are O(log n): http://en.wikipedia.org/wiki/Exponentiation_by_squaring
Juliet
As I mentioned, `N` is the number of bits. Powering is indeed `O(log K)` for some number `K` though.
Larry