I have found some mentions in another question of matrix addition being a quadratic operation. But I think it is linear.

If I double the size of a matrix, I need to calculate double the additions, not quadruple.

The main point of divergence seems to be what counts as the size of the problem. To me, it's the number of elements in the matrix. Others take it to be the number of rows or columns, hence the O(n^2) complexity.

Another problem I have with seeing it as a quadratic operation is that it implies adding 3-dimensional matrices is cubic, adding 4-dimensional matrices is O(n^4), and so on, even though all of these problems can be reduced to the problem of adding two vectors, which has an obviously linear solution.
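To illustrate the reduction: here's a minimal Python sketch (my own, not from any particular library) that flattens a 2-D matrix into a 1-D vector before adding. The number of scalar additions equals the total element count, no matter how many dimensions the original had:

```python
# Flatten a nested (2-D) list into a flat 1-D list.
def flatten(matrix):
    return [x for row in matrix for x in row]

# Element-wise vector addition: one scalar addition per element.
def add_flat(a, b):
    return [x + y for x, y in zip(a, b)]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(add_flat(flatten(a), flatten(b)))  # [6, 8, 10, 12] -- 4 additions for 4 elements
```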

Am I right or wrong? If wrong, why?

+7  A: 

As you already noted, it depends on your definition of the problem size: is it the total number of elements, or the width/height of the matrix? Whichever is correct depends on the larger problem of which the matrix addition is a part.

NB: on some hardware (GPU, vector machines, etc) the addition might run faster than expected (even though complexity is still the same, see discussion below), because the hardware can perform multiple additions in one step. For a bounded problem size (like n < 3) it might even be one step.

Adrian
+1 For mentioning the hardware parallelization
Andres
Hardware won't change asymptotic complexity. It may perform more additions at once, but you can't add two matrices with fewer additions than there are elements in them. It's a common misconception, especially with multithreading. The fact that it's faster doesn't mean its asymptotic complexity is less.
Martinho Fernandes
This is what I get by thinking too hard too late. Why couldn't both views be correct? I feel so dumb now.
Martinho Fernandes
@martinho you are right re asymptotic complexity, me too, thinking too late :) Maybe I can get away with saying that if there is an upper bound on the problem size, the hardware might be able to add short-enough matrix columns in O(1) time
Adrian
I think that logic is kind of cheating.
rlbond
@Adrian: what rlbond said. You still haven't reduced the number of additions. Please don't involve hardware on theoretical subjects.
Martinho Fernandes
In addition to this (very good) answer, consider that if the data being calculated exceeds the size of the memory cache, you're gonna have a bad time.
Norla
I am ready to accept this answer when it stops claiming you can **reduce complexity** with hardware. Stop that! Yes, you can make it faster, but it's still O(N) or O(N^2). Faster O(N) is O(N). O(N/2) = O(N). http://en.wikipedia.org/wiki/Big_o_notation
Martinho Fernandes
@martinho your wish is my command, done.
Adrian
actually with hardware parallelization you can decrease the time complexity since you do two or more additions per unit of time; the total number of additions doesn't change, however
dfa
So? O(N/2) = O(N).
Martinho Fernandes
yes, constants are not considered in big-O notation; however you tend to confuse time complexity with number-of-operations complexity
dfa
http://en.wikipedia.org/wiki/Analysis_of_algorithms 5th paragraph.
Martinho Fernandes
+3  A: 

It's O(M*N) for a 2-dimensional matrix with M rows and N columns.

Or you can say it's O(L) where L is the total number of elements.
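Both views can be checked directly by counting. A hedged sketch (hypothetical helper names, not from any library) that adds an M-by-N matrix and tallies the scalar additions performed, which come out to exactly M*N: quadratic if you call N the "size" of a square matrix, linear if you call M*N the size:

```python
# Add two M-by-N matrices element-wise, counting scalar additions.
def add_and_count(a, b):
    additions = 0
    c = []
    for row_a, row_b in zip(a, b):
        c.append([x + y for x, y in zip(row_a, row_b)])
        additions += len(row_a)  # N additions per row
    return c, additions

a = [[1, 2, 3], [4, 5, 6]]   # M = 2, N = 3
b = [[6, 5, 4], [3, 2, 1]]
c, ops = add_and_count(a, b)
print(ops)  # 6 == M * N
```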

rlbond
+1  A: 

Think of the general-case implementation:

for i = 1 : n
  for j = 1 : m
    c[i][j] = a[i][j] + b[i][j]

For a simple square matrix, that is n x n additions.

BioBuckyBall
+1  A: 

Usually the problem is defined using square matrices "of size N", meaning NxN. By that definition, matrix addition is an O(N^2) operation, since you must visit each of the NxN elements exactly once.

By that same definition, matrix multiplication (using square NxN matrices) is O(N^3) because you need to visit N elements in each of the source matrices to compute each of the NxN elements in the product matrix.

Generally, all matrix operations have a lower bound of O(N^2) simply because you must visit each element at least once to compute anything involving the whole matrix.
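For contrast with addition, here is a minimal sketch of the naive square-matrix multiply (my own illustration): each of the NxN output elements requires visiting N elements of each source matrix, giving N^3 scalar multiply-adds versus N^2 for addition:

```python
# Naive N x N matrix multiplication: three nested loops, O(N^3) work.
def matmul(a, b):
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):          # N multiply-adds per output element
                c[i][j] += a[i][k] * b[k][j]
    return c

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul(a, b))  # [[19, 22], [43, 50]]
```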

Drew Hall
That's true for *dense* matrices; sparse matrix operations are bounded by the number of non-zero entries.
Adrian
@Adrian: Touche...was not thinking about sparse/banded/otherwise structured matrices here.
Drew Hall