I know there is something wrong with the following reasoning but I'm not sure what it is.

The FFT:

  1. given two polynomials

    A = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n

    and

    B = b_0 + b_1 x + b_2 x^2 + ... + b_n x^n

    you can compute the coefficients of the product

    AB = \sum_{k=0}^{2n} \left( \sum_{j=0}^{k} a_j b_{k-j} \right) x^k

    in O(n log n) time.

  2. So given two vectors (a_0, ..., a_n) and (b_0, ..., b_n) we can calculate the convolution vector v_k = \sum_{j=0}^{k} a_j b_{k-j} in O(n log n) time (by zero-padding the vectors).

  3. Given the above, we should be able to calculate the dot product of A = (a_0, ..., a_n) and B = (b_0, ..., b_n), which is A \cdot B = \sum_{j=0}^{n} a_j b_j, in O(n log n) time: preprocess one of the vectors, say B, into B' = (b_n, b_{n-1}, ..., b_1, b_0), then compute the convolution as in 2. in O(n log n) time; the dot product appears as the coefficient at index n.
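Steps 2 and 3 can be sketched with NumPy's FFT (a minimal sketch, assuming NumPy is available; `fft_convolve` is an illustrative helper, not a standard function):

```python
import numpy as np

def fft_convolve(a, b):
    """Convolve two coefficient vectors via the FFT in O(n log n).

    The product polynomial has degree deg(a) + deg(b), so both
    inputs are zero-padded to length len(a) + len(b) - 1 first.
    """
    n = len(a) + len(b) - 1
    fa = np.fft.rfft(a, n)
    fb = np.fft.rfft(b, n)
    return np.fft.irfft(fa * fb, n)

# Step 3: the dot product is one coefficient of a convolution.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = fft_convolve(a, b[::-1])    # reverse one vector first
dot_via_fft = c[len(a) - 1]     # middle coefficient = sum_j a_j b_j
assert np.isclose(dot_via_fft, a @ b)
```

Reversing `b` makes the coefficient of x^{n} in the product exactly \sum_j a_j b_j, which is what step 3 claims.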

If the above reasoning is correct, then that means we can implement matrix multiplication of two n×n matrices in O(n^2 log n) time by computing the O(n log n) dot product O(n) times.

However, the best known run-time for matrix multiplication is about O(n^2.4), so this seems unlikely to be true, which probably means one of steps 1, 2, or 3 is incorrect.

+4  A: 

There are n^2 entries in the product, not n, so this algorithm would be O(n^2 * n log n) = O(n^3 log n).

And the best algorithm for computing dot products is O(n), not O(n log n). That's why the naive algorithm for matrix multiplication is O(n^3). (It's n^2 dot products that can be done in O(n) time).
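The counting in this answer can be made concrete with a minimal sketch (plain Python; `matmul_naive` is an illustrative name):

```python
def matmul_naive(A, B):
    """Multiply two n x n matrices the naive way:
    n^2 output entries, each an O(n) dot product, so O(n^3) total."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))  # O(n) dot product
             for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert matmul_naive(A, B) == [[19, 22], [43, 50]]
```

Even if each dot product were computed in O(n log n) via the FFT trick, there are still n^2 of them, which is where the questioner's O(n^2 log n) count goes wrong.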

Jason
boy I feel stupid lol
ldog
@gmatt: You shouldn't. Most people don't even reach a point in their life where they are studying the FFT. Just consider it a stumbling block on your path towards enlightenment.
Jason
+2  A: 

I wonder why it would be an achievement that in step 3 you can calculate the dot product in O(n log n) time, as it is well known that the dot product can be calculated in O(n) time. Also, step 2 looks like a linear-time step rather than an O(n log n) one?

Also, the conclusion about O(n^2 log n) does not follow logically. You can't take the dot product O(n) times and end up with the matrix AB; as far as I know you have to take O(n^2) dot products, leading to (per your analysis) O(n^3 log n), which is inferior to the standard O(n^3). This is because of your strange O(n log n) dot product result.

antti.huima