Or, more generally: how do you decide between a parallel and a sequential implementation of an operation? Because of parallelization overhead, it is hard to know without testing which of the two is faster. Obviously it will take some time to train "the decider" which method to use, and such a method cannot be perfect, so it is probabilistic in nature. The inputs x, y, z do influence "the decider". A very naive implementation would be to give both methods a 1/2 chance at the beginning and then start favoring one according to past performance; this disregards x, y, z (a sketch of such a decider follows the sample code below). Anyhow, please share your heuristics, your experience if any, and your tips on this.

Sample code:

public interface IComputer {
    decimal Compute(decimal x, decimal y, decimal z);
}

public class SequentialComputer : IComputer {
    public decimal Compute(decimal x, decimal y, decimal z) { ... } // sequential implementation
}

public class ParallelComputer : IComputer {
    public decimal Compute(decimal x, decimal y, decimal z) { ... } // parallel implementation
}

public class HybridComputer  : IComputer {
    private SequentialComputer sc;
    private ParallelComputer pc;
    private TheDecider td;  // Helps to decide between the two.

    public HybridComputer() {
        sc = new SequentialComputer();
        pc = new ParallelComputer();
        td = new TheDecider();
    }

    public decimal Compute(decimal x, decimal y, decimal z) {
        decimal result;
        var sw = System.Diagnostics.Stopwatch.StartNew();
        if (td.PickOneOfTwo() == 0) {
            result = sc.Compute(x, y, z);   // time the sequential path
        } else {
            result = pc.Compute(x, y, z);   // time the parallel path
        }
        sw.Stop();
        td.Train(sw.ElapsedMilliseconds);   // feed the measured time back to the decider
        return result;
    }
}
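
Here is a minimal sketch of what TheDecider could look like under the naive scheme described above: start at 50/50, keep a running average time per implementation, and favor whichever has been faster so far while occasionally re-testing the other. The class name and its PickOneOfTwo/Train members come from the sample code; everything inside (the exploration rate, the averaging) is only an assumption for illustration.

using System;

public class TheDecider {
    private readonly Random rng = new Random();
    private readonly double[] avgTime = { 0.0, 0.0 }; // running average time per choice
    private readonly int[] count = { 0, 0 };          // how often each choice has been timed
    private const double ExploreRate = 0.1;           // small chance to re-test the current "loser"
    private int lastPick;

    public int PickOneOfTwo() {
        // Pick at random until both choices have been tried, and occasionally thereafter.
        if (count[0] == 0 || count[1] == 0 || rng.NextDouble() < ExploreRate)
            lastPick = rng.Next(2);
        else
            lastPick = avgTime[0] <= avgTime[1] ? 0 : 1; // otherwise favor the faster one so far
        return lastPick;
    }

    public void Train(double elapsedMilliseconds) {
        // Update the running average for whichever choice was just timed.
        count[lastPick]++;
        avgTime[lastPick] += (elapsedMilliseconds - avgTime[lastPick]) / count[lastPick];
    }
}

As noted above, this ignores x, y and z; a step up would be to bucket inputs by size and keep a separate pair of averages per bucket.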
A:

I would remove the computer specialisations and use WithDegreeOfParallelism on your PLINQ code. Just have your decider return 1 if it has learnt that no parallelism is optimal.

Gary
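
For illustration, here is roughly what that suggestion could look like. Only AsParallel and WithDegreeOfParallelism are actual PLINQ APIs; the query shape and the Work method are made-up stand-ins for the real computation.

using System.Linq;

public class PlinqComputer {
    // degreeOfParallelism == 1 behaves essentially like the sequential version;
    // a higher value lets PLINQ spread the per-item work over that many tasks.
    public decimal Compute(decimal x, decimal y, decimal z, int degreeOfParallelism) {
        return Enumerable.Range(0, 1000)
            .AsParallel()
            .WithDegreeOfParallelism(degreeOfParallelism)
            .Select(i => Work(x, y, z, i))   // Work stands in for the real per-item computation
            .Sum();
    }

    private static decimal Work(decimal x, decimal y, decimal z, int i) {
        return (x + y * z) / (i + 1);        // placeholder computation
    }
}

The decider from the question would then only have to pick the degree, e.g. 1 when it has learnt that parallelism does not pay off and Environment.ProcessorCount otherwise.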