I know there is something wrong with the following reasoning, but I'm not sure what it is.
1. The FFT: given two polynomials
$$A = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n$$
and
$$B = b_0 + b_1 x + b_2 x^2 + \dots + b_n x^n,$$
you can compute the coefficients of the product
$$AB = \sum_{k=0}^{2n} \Big( \sum_{j=0}^{k} a_j b_{k-j} \Big) x^k$$
in $O(n \log n)$ time.
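For concreteness, here is a minimal sketch of step 1 in Python (using numpy's FFT; the helper name `poly_mult_fft` is mine):

```python
import numpy as np

def poly_mult_fft(a, b):
    """Multiply the polynomials with coefficient vectors a and b
    via the FFT, in O(n log n) time."""
    n = len(a) + len(b) - 1           # the product of two degree-n polynomials has 2n+1 coefficients
    size = 1 << (n - 1).bit_length()  # round up to a power of two for the FFT
    fa = np.fft.fft(a, size)          # evaluate A at the roots of unity (zero-padded)
    fb = np.fft.fft(b, size)          # evaluate B at the roots of unity
    prod = np.fft.ifft(fa * fb)       # multiply pointwise, then interpolate back
    return np.real(prod[:n])          # coefficients of AB (imaginary parts are rounding noise)
```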
2. So given two vectors $(a_0, \dots, a_n)$ and $(b_0, \dots, b_n)$, we can calculate the vector $v_k = \sum_{j=0}^{k} a_j b_{k-j}$ in $O(n \log n)$ time (by treating the vectors as coefficient vectors of polynomials, padded with zeros).
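As a quick sanity check of step 2, using the hypothetical `poly_mult_fft` sketched above:

```python
a = [1, 2, 3]            # (a_0, a_1, a_2)
b = [4, 5, 6]            # (b_0, b_1, b_2)
v = poly_mult_fft(a, b)  # v_k = sum_{j=0}^{k} a_j b_{k-j}
# v ≈ [4, 13, 28, 27, 18], matching np.convolve(a, b)
```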
3. Given the above, we should be able to calculate the dot product of $A = (a_0, \dots, a_n)$ and $B = (b_0, \dots, b_n)$, which is $A \cdot B = \sum_{j=0}^{n} a_j b_j$, in $O(n \log n)$ time: preprocess one of the vectors, say $B$, into $B' = (b_n, b_{n-1}, \dots, b_1, b_0)$, then compute the convolution as in step 2 in $O(n \log n)$ time; the coefficient $v_n = \sum_{j=0}^{n} a_j b'_{n-j} = \sum_{j=0}^{n} a_j b_j$ is exactly the dot product.
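A sketch of step 3 under the same assumptions (`dot_via_fft` is again a name I made up; both vectors are assumed to have the same length):

```python
def dot_via_fft(a, b):
    """Dot product of equal-length vectors a and b via the
    reversed-vector convolution trick: the coefficient of x^n in
    A(x) * reverse(B)(x) is sum_j a_j b_j."""
    n = len(a) - 1
    v = poly_mult_fft(a, b[::-1])  # convolve A with the reversed B
    return v[n]                    # middle coefficient = the dot product

# dot_via_fft([1, 2, 3], [4, 5, 6]) ≈ 1*4 + 2*5 + 3*6 = 32
```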
If the above reasoning is correct, then that means we can implement matrix multiplication of two $n \times n$ matrices in $O(n^2 \log n)$ time by computing the dot product in $O(n \log n)$ time, $O(n)$ times.
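Spelled out with the helpers above, the proposed scheme would look something like this (`matmul_via_fft` is hypothetical; each entry of the result is one FFT-based dot product):

```python
def matmul_via_fft(A, B):
    """Multiply n x n matrices, computing each entry of the product
    as an FFT-based dot product of a row of A with a column of B."""
    n = len(A)
    BT = [list(col) for col in zip(*B)]   # columns of B as rows
    return [[dot_via_fft(A[i], BT[j])     # C[i][j] = row_i(A) . col_j(B)
             for j in range(n)]
            for i in range(n)]
```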
However, the best known running time for matrix multiplication is about $O(n^{2.4})$, so this seems unlikely to be true, which probably means one of steps 1, 2, or 3 is incorrect.