I am trying to write a simple matrixMultiplication application that multiplies two square matrices using CUDA. The problem is that my kernel only computes correct results in block (0,0) of the grid.

This is my invocation code:

dim3 dimBlock(4,4,1);
dim3 dimGrid(4,4,1);
// Launch the kernel
MatrixMulKernel<<<dimGrid,dimBlock>>>(Md,Nd,Pd,Width);

This is my kernel function:

__global__ void MatrixMulKernel(int* Md, int* Nd, int* Pd, int Width)
{
    const int tx = threadIdx.x;
    const int ty = threadIdx.y;
    const int bx = blockIdx.x;
    const int by = blockIdx.y;
    const int row = by * blockDim.y + ty;
    const int col = bx * blockDim.x + tx;

    // Pvalue stores the Pd element that is computed by the thread
    int Pvalue = 0;

    for (int k = 0; k < Width; k++)
    {
        Pvalue += Md[row * Width + k] * Nd[k * Width + col];
    }
    __syncthreads();

    // Write the result to device memory; each thread writes one element
    Pd[row * Width + col] = Pvalue;
}

I think the problem may have something to do with memory, but I'm a bit lost. What should I do to make this code work across multiple blocks?

+1  A: 

The problem was with my CUDA kernel invocation: the grid was far too small for the matrices being processed. The launch configuration has to cover every element of the output, so gridDim.x * blockDim.x and gridDim.y * blockDim.y must each be at least Width; a 4x4 grid of 4x4 blocks only covers a 16x16 result.
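As a rough sketch (assuming the same MatrixMulKernel signature as above; the 16x16 block size is just an illustrative choice), the grid can be derived from Width, rounded up so every output element is covered:

dim3 dimBlock(16, 16, 1);
dim3 dimGrid((Width + dimBlock.x - 1) / dimBlock.x,
             (Width + dimBlock.y - 1) / dimBlock.y,
             1);
// Launch enough blocks to cover the whole Width x Width output
MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);

If Width is not a multiple of the block size, the rounded-up grid launches threads past the matrix edge, so the kernel body should guard its global memory accesses, for example:

if (row < Width && col < Width)
{
    int Pvalue = 0;
    for (int k = 0; k < Width; k++)
    {
        Pvalue += Md[row * Width + k] * Nd[k * Width + col];
    }
    Pd[row * Width + col] = Pvalue;
}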

ZeroDivide