Using your example of computing both the sum and difference, you will probably get the best performance by computing both at the same time (i.e. in the same kernel), since the inputs then only need to be read from global memory once.
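For illustration, a minimal sketch of such a fused kernel, assuming element-wise operations on two float arrays (the names and types are placeholders, not taken from your code):

```cuda
// Hypothetical fused kernel: each thread reads its elements of a and b from
// global memory once and writes both results in the same pass.
__global__ void sumAndDiff(const float *a, const float *b,
                           float *sum, float *diff, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float x = a[i];
        float y = b[i];
        sum[i]  = x + y;   // first result
        diff[i] = x - y;   // second result
    }
}
```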
Assuming this is not possible for some reason, then if your arrays are very large the best approach is probably to use the whole GPU (i.e. multiple multiprocessors) for each computation, in which case it doesn't matter too much that you run one after the other.
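If you do keep them separate, launching the two kernels back-to-back in the default stream already gives each one the whole GPU; the second simply starts when the first finishes. A sketch, with hypothetical kernel names:

```cuda
// Each launch uses enough blocks to cover the data (and occupy all the
// multiprocessors for large n); the launches execute sequentially.
int threads = 256;
int blocks  = (n + threads - 1) / threads;
sumKernel<<<blocks, threads>>>(a, b, sum, n);
diffKernel<<<blocks, threads>>>(a, b, diff, n);
```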
For both of the above cases I would strongly recommend you check out the reduction sample in the SDK, which walks you through from a naive implementation up to a pretty quick version, with good documentation.
Having said all of that, if the amount of work is small enough that one of your computations would not fully utilise the whole GPU, then there are two ways to run different computations on different multiprocessors:
- Use "Concurrent Kernels", where multiple kernels run on the same GPU at the same time. See the CUDA Programming Guide for more information and check out the concurrentKernels sample in the SDK, in essence you manual tell the scheduler that there is no dependency between the two (by using CUDA streams) which allows thenm to be executed simultaneously.
- Have a switch on the blockIdx to select which code to execute (also sketched below).
The first of these is far preferable if your hardware supports it (you will need Compute Capability 2.0 or greater), since it is far simpler to read and maintain.
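A minimal sketch of the streams approach, reusing the hypothetical kernel names from above (error checking omitted):

```cuda
// Launch the two independent kernels in separate (non-default) streams so
// the scheduler is free to overlap them on a CC 2.0+ device.
cudaStream_t s1, s2;
cudaStreamCreate(&s1);
cudaStreamCreate(&s2);

sumKernel<<<blocks, threads, 0, s1>>>(a, b, sum, n);
diffKernel<<<blocks, threads, 0, s2>>>(a, b, diff, n);

cudaStreamSynchronize(s1);
cudaStreamSynchronize(s2);
cudaStreamDestroy(s1);
cudaStreamDestroy(s2);
```

And, for completeness, the blockIdx switch under the same assumptions:

```cuda
// Launch with twice the blocks needed for one task; the first half of the
// grid computes the sum, the second half the difference.
__global__ void switchedKernel(const float *a, const float *b, float *sum,
                               float *diff, int n, int blocksPerTask)
{
    int task = blockIdx.x / blocksPerTask;   // 0 = sum, 1 = diff
    int i = (blockIdx.x % blocksPerTask) * blockDim.x + threadIdx.x;
    if (i < n) {
        if (task == 0)
            sum[i]  = a[i] + b[i];
        else
            diff[i] = a[i] - b[i];
    }
}
// e.g. switchedKernel<<<2 * blocks, threads>>>(a, b, sum, diff, n, blocks);
```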