I am trying to parallelize a classic map-reduce problem (one that parallelizes well with MPI) with OpenCL, namely the AMD implementation. But the result bothers me.

Let me briefly describe the problem first. There are two types of data that flow into the system: the feature set (30 parameters each) and the sample set (9000+ dimensions each). It is a classic map-reduce problem in the sense that I need to calculate the score of every feature on every sample (Map), and then sum up the overall score for every feature (Reduce). There are around 10k features and 30k samples.
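For concreteness, the sequential structure is roughly the following (just a sketch: score(), NUM_FEATURES, NUM_SAMPLES and SAMPLE_DIM are placeholders, not my actual code):

/* Sequential sketch of the map-reduce structure described above.
 * score() stands in for the real per-(feature, sample) test; it reads the
 * feature's 30 parameters and a handful of the sample's 9000+ dimensions. */
#define NUM_FEATURES 10000
#define NUM_SAMPLES  30000
#define SAMPLE_DIM   9000

unsigned int score(const int *feature, const int *sample);

void score_all(const int *features, const int *samples, unsigned int *total)
{
    for (int f = 0; f < NUM_FEATURES; f++)
        total[f] = 0;
    for (int f = 0; f < NUM_FEATURES; f++)        /* Map: every feature ...   */
        for (int s = 0; s < NUM_SAMPLES; s++)     /* ... against every sample */
            total[f] += score(features + f * 30,  /* Reduce: sum per feature  */
                              samples + s * SAMPLE_DIM);
}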

I tried different ways to solve the problem. First, I tried to decompose the problem by features. The problem is that the score calculation involves random memory access (pick some of the 9000+ dimensions and do addition/subtraction). Since I cannot coalesce the memory accesses, it is costly. Then, I tried to decompose the problem by samples. The problem there is that, to sum up the overall scores, all threads compete for a few score variables; they keep overwriting each other, so the result is incorrect. (I cannot compute the individual scores first and sum them up later, because that would require 10k * 30k * 4 bytes, roughly 1.2 GB.)

The first method gives me the same performance as an 8-thread version on an i7 860 CPU. However, I don't think the problem is unsolvable: it is remarkably similar to ray tracing (where you test millions of rays against millions of triangles). Any ideas?

In addition, I am posting some of the code I have:

decompose by feature (works, but slow):

__kernel void __ccv_cl_pos_error_rate(__global unsigned int* err_rate,
                                      __constant int* feature,
                                      __constant int* data,
                                      int num,
                                      __constant unsigned int* w,
                                      int s, int isiz0, int isiz01,
                                      int step0, int step1)
{
    /* one work-item per feature */
    int igrid = get_global_id(0);
    __constant int* of = feature + igrid * 30;   /* this work-item's 30 feature parameters */
    unsigned int e = 0;
    int k, i;
    int step[] = { step0, step1 };
    for (k = 0; k < num; k++)                    /* walk all samples */
    {
        __constant int* kd = data + k * isiz01;
        int pmin = kd[of[0] * isiz0 + of[1] + of[2] * step[of[0]]];
        int nmax = kd[of[3] * isiz0 + of[4] + of[5] * step[of[3]]];
        for (i = 0; i < 5; i++)
        {
            if (of[i * 6] >= 0)
                pmin = min(pmin, kd[of[i * 6] * isiz0 + of[i * 6 + 1] + of[i * 6 + 2] * step[of[i * 6]]]);
            if (of[i * 6 + 3] >= 0)
                nmax = max(nmax, kd[of[i * 6 + 3] * isiz0 + of[i * 6 + 4] + of[i * 6 + 5] * step[of[i * 6 + 3]]]);
        }
        if (pmin <= nmax)
            e += w[s + k];                       /* accumulate this sample's weight into the error */
    }
    err_rate[igrid] += e;                        /* private accumulator, no contention */
}
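On the host side, this decomposition is launched with one work-item per feature, roughly like this (buffer creation and error handling omitted, variable names illustrative; scalar arguments are cl_int on the host):

/* Host-side launch sketch: global size = number of features (~10k). */
size_t global = num_features;

clSetKernelArg(kernel, 0, sizeof(cl_mem), &err_rate_buf);
clSetKernelArg(kernel, 1, sizeof(cl_mem), &feature_buf);
clSetKernelArg(kernel, 2, sizeof(cl_mem), &data_buf);
clSetKernelArg(kernel, 3, sizeof(cl_int), &num);
clSetKernelArg(kernel, 4, sizeof(cl_mem), &w_buf);
clSetKernelArg(kernel, 5, sizeof(cl_int), &s);
clSetKernelArg(kernel, 6, sizeof(cl_int), &isiz0);
clSetKernelArg(kernel, 7, sizeof(cl_int), &isiz01);
clSetKernelArg(kernel, 8, sizeof(cl_int), &step0);
clSetKernelArg(kernel, 9, sizeof(cl_int), &step1);

clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
clFinish(queue);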

decompose by sample (does not work):


__kernel void __ccv_cl_pos_error_rate(__global unsigned int* err_rate,
                                      __constant int* feature,
                                      __constant int* data,
                                      int num,
                                      __constant unsigned int* w,
                                      int s, int isiz0, int isiz01,
                                      int step0, int step1,
                                      __local int* shared)
{
    /* one work-item per sample */
    int igrid = get_global_id(0);
    int lsize = get_local_size(0);
    int lid = get_local_id(0);
    unsigned int e = 0;
    int k, i;
    int ws = w[s + igrid];
    int step[] = { step0, step1 };
    /* cooperative load into local memory -- but igrid differs per work-item,
       so this mixes rows of different samples into the one shared buffer */
    for (k = 0; k < isiz01; k += lsize)
        if (k + lid < isiz01)
            shared[k + lid] = data[igrid * isiz01 + k + lid];
    barrier(CLK_LOCAL_MEM_FENCE);
    for (k = 0; k < num; k++)                    /* walk all features */
    {
        __constant int* of = feature + k * 30;
        int pmin = shared[of[0] * isiz0 + of[1] + of[2] * step[of[0]]];
        int nmax = shared[of[3] * isiz0 + of[4] + of[5] * step[of[3]]];
        for (i = 0; i < 5; i++)
        {
            if (of[i * 6] >= 0)
                pmin = min(pmin, shared[of[i * 6] * isiz0 + of[i * 6 + 1] + of[i * 6 + 2] * step[of[i * 6]]]);
            if (of[i * 6 + 3] >= 0)
                nmax = max(nmax, shared[of[i * 6 + 3] * isiz0 + of[i * 6 + 4] + of[i * 6 + 5] * step[of[i * 6 + 3]]]);
        }
        if (pmin <= nmax)
            err_rate[k] += ws; /* here is wrong: every work-item races on err_rate[k],
                                  so updates overwrite each other */
    }
    barrier(CLK_LOCAL_MEM_FENCE);
}
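One possible way around that race, without atomics and without the full 10k * 30k score matrix, is a two-stage reduction: give every work-group its own row of a (num_groups x num_features) partial buffer, let the work-items inside a group split the features among themselves, and sum the rows in a cheap second pass (for 10k features and a few hundred groups the buffer is only a few MB). Below is a rough, untested sketch of a first-stage kernel along those lines (illustrative name, partial must be zeroed by the host first); note the random reads from kd are still uncoalesced, so this only addresses correctness, not bandwidth:

__kernel void __ccv_cl_partial_error_rate(__global unsigned int* partial, /* num_groups * num, zeroed by host */
                                          __global const int* feature,
                                          __global const int* data,
                                          int num,          /* number of features, as in the kernel above */
                                          int num_samples,
                                          __global const unsigned int* w,
                                          int s, int isiz0, int isiz01,
                                          int step0, int step1)
{
    int gid   = get_group_id(0);
    int ngrp  = get_num_groups(0);
    int lid   = get_local_id(0);
    int lsize = get_local_size(0);
    int step[] = { step0, step1 };
    __global unsigned int* my_row = partial + gid * num;   /* this group's private accumulators */
    int smp, k, i;
    for (smp = gid; smp < num_samples; smp += ngrp)         /* groups stride over the samples */
    {
        __global const int* kd = data + smp * isiz01;
        unsigned int ws = w[s + smp];
        for (k = lid; k < num; k += lsize)                  /* work-items split the features */
        {
            __global const int* of = feature + k * 30;
            int pmin = kd[of[0] * isiz0 + of[1] + of[2] * step[of[0]]];
            int nmax = kd[of[3] * isiz0 + of[4] + of[5] * step[of[3]]];
            for (i = 0; i < 5; i++)
            {
                if (of[i * 6] >= 0)
                    pmin = min(pmin, kd[of[i * 6] * isiz0 + of[i * 6 + 1] + of[i * 6 + 2] * step[of[i * 6]]]);
                if (of[i * 6 + 3] >= 0)
                    nmax = max(nmax, kd[of[i * 6 + 3] * isiz0 + of[i * 6 + 4] + of[i * 6 + 5] * step[of[i * 6 + 3]]]);
            }
            if (pmin <= nmax)
                my_row[k] += ws;    /* no race: feature k maps to exactly one work-item in this group */
        }
    }
}

A second small kernel (or the host) would then sum the num_groups entries of each column into err_rate[k].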
+1  A: 

hi, andrew cooke from hn here. from your first attempt i now understand the problem better, and see that having the choice of sample depend on the feature is what is killing you there.

is the selection of sample by feature completely random, or can you exploit regularities in that (ordering features so that those that use the same samples are processed together)? this is obvious, so i guess it is not possible.

unfortunately, i do not understand your second attempt.

andrew cooke
yes, never mind, my second attempt is flawed. Essentially, I am trying to load one sample per work-group into local memory and do the calculation there, but it just gets messed up. I will try to clean it up, get some benchmarks, and post it again. Thanks.
liuliu
maybe this is something like what you are trying to do (it is much more complex than anything i have tried):
(1) for some size subset of features in a workgroup...
(2) create a list, in local memory, of all indices into the samples;
(3) order those indices;
(4) load data from samples (replacing indices with values?);
(5) complete calculation using local samples.
unfortunately, i *think* that your problem is too sparse for that to help (that features don't have enough samples in common for the restricted size of the local memory). but i thought i'd describe it "just in case"...
andrew cooke
ps if it is that sparse, and if all your data fits in the l3 cache of a multicore CPU then that's probably your sweet spot until GPUs have similar size caches...
andrew cooke
just an update: the feature decomposition didn't work out. The performance got hammered and the atom_add is buggy (I guess because the __constant data is already cached, explicitly caching it in local memory doesn't help much). However, with more optimizations based on the first attempt (I hadn't employed the full CPU-side optimization technique for feature comparison in the GPU kernel in the first attempt), I now see a 75% speed-up compared to the 8-thread i7 860 CPU version (on an HD 5770 GPU). Still working on it.
liuliu
sweet. good luck...
andrew cooke