What is the Big-O complexity of the most widespread algorithms for the basic arithmetic operations, like multiplication, square root, logarithm, and scalar and matrix products? Are there exotic algorithms that are more efficient in terms of Big-O complexity but for some reason not widespread in practice (e.g. not implemented in popular software libraries)?
Operations don't have complexity, algorithms do. For example, there are various square root algorithms, and they will have different complexity.
You'll treat most simple operations as O(1) because your input size is usually fixed (i.e. 32 or 64 bits).
Under normal conditions, your platform will perform exactly the same operation for multiplication, square root, logarithm, etc. regardless of the "size" of your input (i.e. int a = 0; and int b = Int32.MaxValue; are both 32-bit integers).
It gets interesting once you start to look at matrices or representing arbitrary-precision numbers, but someone already linked the wikipedia summary, so I won't go into that.
Just don't use Schönhage–Strassen to multiply "normal" little numbers. It'll make me cry. Just because an algorithm is O(n^2) doesn't mean it's bad - particularly when n is almost always 2^5 or 2^6.
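To make that concrete, here's a minimal sketch of the schoolbook O(n^2) multiplication that the fancy algorithms compete with (the function name and little-endian digit-list representation are my own, purely for illustration):

```python
def schoolbook_mul(a, b, base=10):
    """Multiply two numbers given as little-endian digit lists; O(len(a)*len(b))."""
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            total = result[i + j] + da * db + carry
            result[i + j] = total % base
            carry = total // base
        result[i + len(b)] += carry
    # Trim the leading zero (at the end of a little-endian list), if any.
    while len(result) > 1 and result[-1] == 0:
        result.pop()
    return result

# 12 * 34 = 408, digits stored little-endian
print(schoolbook_mul([2, 1], [4, 3]))  # [8, 0, 4]
```

For a handful of digits this is hard to beat - the asymptotically better algorithms only pay off once the operands get large.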
Square root and logarithm may be implemented in various ways, and the choice greatly affects the complexity (as a function of the required precision).
If they are implemented with lookup tables (and some kind of interpolation), the memory requirement really explodes as more precision is required, but the complexity is just that of looking up the value in the array and possibly applying the interpolation.
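A rough sketch of the lookup-table approach in Python (the table size, range restriction, and helper name here are all illustrative choices, not from any particular library):

```python
import math

# Precomputed table for log(x) on [1, 2); more precision => a bigger table,
# but each evaluation stays a constant-time lookup plus interpolation.
TABLE_SIZE = 1024
STEP = 1.0 / TABLE_SIZE
LOG_TABLE = [math.log(1.0 + i * STEP) for i in range(TABLE_SIZE + 1)]

def log_interp(x):
    """Approximate log(x) for 1 <= x < 2 via table lookup + linear interpolation."""
    pos = (x - 1.0) / STEP
    i = int(pos)
    frac = pos - i
    return LOG_TABLE[i] * (1.0 - frac) + LOG_TABLE[i + 1] * frac

print(abs(log_interp(1.3) - math.log(1.3)) < 1e-6)  # True
```

Halving the interpolation error means quadrupling the table for a linear scheme, which is exactly the memory explosion described above.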
More popularly they seem to be implemented by their series definitions: recurse or iterate a statement for a number of rounds until you reach the required precision. Here the number of rounds may get very high as more precision is needed, and the calculations themselves are also affected by the increased precision.
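The iterative flavor can be sketched with Newton's method for the square root - roughly doubling the number of correct digits per round, so tighter precision means more rounds (the function and tolerance parameter are my own illustration):

```python
def newton_sqrt(x, eps=1e-12):
    """Iterate Newton's method until the guess squares to within eps of x."""
    guess = x if x >= 1 else 1.0
    rounds = 0
    while abs(guess * guess - x) > eps:
        guess = 0.5 * (guess + x / guess)   # Newton step for f(g) = g^2 - x
        rounds += 1
    return guess, rounds

root, rounds = newton_sqrt(2.0)
print(abs(root - 2.0 ** 0.5) < 1e-9)  # True
```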
There's a Fourier-type algorithm that does integer multiplication as well (Schönhage–Strassen).
I thought there was a version of Strassen's algorithm that does slightly better than the normal approach for integer multiplication, but now that I think of it, that one ends up being the same as the straightforward method...
Addition and subtraction are pretty much just addition and subtraction. Division and square root are probably interesting, though...
ALSO: Note that so far everyone has talked about INTEGER arithmetic. Once you get to floats/doubles, all bets are off. Then you get into the world of numerical analysis, and that's a whole field of its own...
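A quick taste of why floating point is its own world - even basic algebraic laws like associativity stop holding:

```python
# Floating-point addition is not associative, and decimal fractions like 0.1
# have no exact binary representation.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)        # False
print(0.1 + 0.2)     # 0.30000000000000004
```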
Take a look at BigInteger, which operates on arbitrary-length integers. Everything now has a cost in terms of the size of the input, which is the number of bits (typically O(log K) bits for a number K). I will use N for the number of bits below.
For example, addition and subtraction are now O(N). Multiplication is either O(N^2) (naive) or O(N (log N)^(2+epsilon)) with FFT.
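Between those two extremes sits Karatsuba's algorithm at O(N^log2(3)) ≈ O(N^1.585), which real BigInteger implementations typically switch to at moderate sizes before FFT methods pay off. A rough Python sketch (the split point and function name are illustrative):

```python
def karatsuba(x, y):
    """Multiply non-negative integers with three recursive half-size multiplies."""
    if x < 10 or y < 10:                      # base case: a single-digit factor
        return x * y
    n = max(len(str(x)), len(str(y)))
    half = n // 2
    p = 10 ** half
    xh, xl = divmod(x, p)                     # split each operand in two halves
    yh, yl = divmod(y, p)
    a = karatsuba(xh, yh)                     # high * high
    c = karatsuba(xl, yl)                     # low * low
    b = karatsuba(xh + xl, yh + yl) - a - c   # cross terms from ONE extra multiply
    return a * p * p + b * p + c

print(karatsuba(1234, 5678) == 1234 * 5678)  # True
```

The trick is getting the two cross terms from a single multiplication instead of two, which is what drops the exponent below 2.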
Other algorithms include the "power" function, which takes O(N) multiplications (except now each multiplication has a cost of its own!).
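That O(N) multiplication count comes from exponentiation by squaring - one squaring per bit of the exponent, plus a multiply for each set bit (a minimal sketch; the function name is mine):

```python
def power(base, exp):
    """Compute base**exp with O(bits(exp)) multiplications via square-and-multiply."""
    result = 1
    while exp > 0:
        if exp & 1:            # odd exponent: fold the current base into the result
            result *= base
        base *= base           # square for the next bit
        exp >>= 1
    return result

print(power(3, 10) == 3 ** 10)  # True
```

Note that each of those multiplications acts on ever-larger operands, so the total cost is much more than N times the cost of the first multiply.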
And there are additional complexities for BigDecimal, the arbitrary-length decimal equivalent. Beyond the more basic operations, some things get more interesting as well (especially if you want to figure out how much precision you want). You can take a look at Java's implementation.
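Python's decimal module is a close analogue of Java's BigDecimal and shows the "choose your precision" aspect directly - the cost of each operation grows with the precision you ask for:

```python
from decimal import Decimal, getcontext

# Ask for 50 significant digits; every subsequent operation rounds to that.
getcontext().prec = 50
print(Decimal(1) / Decimal(7))
# 0.14285714285714285714285714285714285714285714285714
```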