I want to multiply two matrices, but the triple loop has O(n^3) complexity. Is there any algorithm, perhaps from dynamic programming, to multiply two matrices with O(n) complexity?
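For concreteness, the triple loop being referred to looks something like this (a minimal sketch in Python, with illustrative names; the actual code may differ):

```python
# Naive O(n^3) multiplication: three nested loops over lists of lists.
def naive_matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            a_ik = A[i][k]
            for j in range(p):
                C[i][j] += a_ik * B[k][j]
    return C
```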
Short answer: No
Long answer: There are ways if you have special kinds of matrices (for instance, a diagonal matrix). The better general matrix multiplication algorithms out there can pare you down to something like O(n^2.4) (http://en.wikipedia.org/wiki/Coppersmith-Winograd_algorithm). The major one I am somewhat familiar with uses a divide-and-conquer approach to split up the workload (not the one I linked to).
I hope this helps!
No, I don't think so.
There is no way unless you are using a parallel processing machine, and even that has its own dependencies and limitations.
So far, O(n) matrix multiplication has not been achieved.
The best matrix multiplication algorithm known so far is the Coppersmith-Winograd algorithm, with O(n^2.38) complexity, but it is not used for practical purposes.
However, you can always use Strassen's algorithm, which has O(n^2.81) complexity. There is no known algorithm for matrix multiplication with O(n) complexity.
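For illustration, here is a minimal sketch of Strassen's divide-and-conquer scheme, assuming square matrices whose size is a power of two, stored as plain lists of lists (the helper names are made up; a practical version would fall back to the ordinary triple loop below some size cutoff):

```python
# Strassen's algorithm: 7 recursive half-size products instead of 8,
# giving O(n^log2(7)) ~ O(n^2.81) arithmetic operations.
def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    n = len(A)
    if n == 1:  # base case: 1x1 matrices
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # split each matrix into four quadrants
    A11 = [row[:h] for row in A[:h]]; A12 = [row[h:] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]; A22 = [row[h:] for row in A[h:]]
    B11 = [row[:h] for row in B[:h]]; B12 = [row[h:] for row in B[:h]]
    B21 = [row[:h] for row in B[h:]]; B22 = [row[h:] for row in B[h:]]
    # seven recursive products
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # recombine into the four quadrants of the result
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom
```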
There is a theoretical lower bound of Ω(n^2) for matrix multiplication, because you have to touch that many memory locations just to read the inputs and write the result. As others have said, there are algorithms that drop below O(n^3), but they are usually impractical in real use.
If you need to speed it up, you might want to look at cache-oblivious algorithms, such as this one (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.44.5650), which improve performance by performing operations in a cache-friendly order, ensuring that data is in the cache when it is needed.
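As a rough illustration of the idea (not the cache-oblivious recursion from the linked paper, but a simpler fixed-block-size variant), a blocked multiplication keeps small tiles of the matrices resident in cache; the block size of 64 is an assumption you would tune for the target machine, and in pure Python the interpreter overhead hides most of the benefit, so treat this as a sketch of the loop structure only:

```python
# Blocked (tiled) O(n^3) multiplication: same arithmetic as the naive loop,
# but iterating over small blocks so each tile stays in cache while it is reused.
def blocked_matmul(A, B, block=64):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for kk in range(0, n, block):
            for jj in range(0, n, block):
                for i in range(ii, min(ii + block, n)):
                    for k in range(kk, min(kk + block, n)):
                        a_ik = A[i][k]
                        for j in range(jj, min(jj + block, n)):
                            C[i][j] += a_ik * B[k][j]
    return C
```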
If the matrices are known to be diagonal, you can multiply them in O(n) operations. But in general, you cannot.
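A minimal sketch of the diagonal case, assuming each matrix is represented just by its diagonal as a length-n list (the product of two diagonal matrices is diagonal, so only n multiplications are needed):

```python
# O(n) multiplication of two diagonal matrices, each given by its diagonal.
def diag_matmul(d1, d2):
    return [a * b for a, b in zip(d1, d2)]  # diagonal of the product
```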
Matrices have O(n^2) elements, and every element must be considered at least once for the result, so there is no possible way for a matrix multiplication algorithm to run in less than O(n^2) operations.
If you have n^2 processors and a shared-read memory architecture, you could multiply two matrices in O(n) time... but this is only theory for now.
If
- your matrices are large
- they have a lot of zeroes
- you are willing to store them in strange formats
you can devise algorithms whose complexity depends only on the number of non-zero elements. This can be essential for some problems (e.g. finite element methods).
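A minimal sketch of the sparse idea, assuming each matrix is stored as a dictionary mapping a row index to a {column: value} dictionary of its non-zero entries (the format and names here are illustrative; in practice you would use an established layout such as CSR, e.g. scipy.sparse):

```python
# Sparse multiplication over dict-of-dicts storage: the work is proportional
# to the number of non-zero pairings, not to n^2.
def sparse_matmul(A, B):
    C = {}
    for i, row in A.items():
        for k, a_ik in row.items():
            for j, b_kj in B.get(k, {}).items():
                C.setdefault(i, {})
                C[i][j] = C[i].get(j, 0) + a_ik * b_kj
    return C

# Example: only three non-zero entries in total.
A = {0: {0: 2.0, 3: 1.0}}
B = {0: {1: 4.0}, 3: {1: -1.0}}
print(sparse_matmul(A, B))  # {0: {1: 7.0}}
```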