First off, I'll point out that concurrent programming is not necessarily synonymous with parallel programming. Concurrent programming is about constructing applications from loosely coupled tasks; for instance, a dialog window could implement the interaction with each control as a separate task. Parallel programming, on the other hand, is explicitly about spreading the solution of some computational task across more than a single piece of execution hardware, essentially always for performance reasons of some sort (note: even too little RAM is a performance reason when the alternative is swapping).
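To make the distinction concrete, here's a minimal C sketch of both ideas side by side. The task names and the two-thread split are purely illustrative, not taken from any particular book or API:

```c
/* Sketch: concurrency vs. parallelism (illustrative task names only).
   Build with: gcc -pthread concurrent_vs_parallel.c */
#include <pthread.h>
#include <stdio.h>

#define N 1000000

/* Concurrency: loosely coupled tasks that merely coexist in time;
   neither one exists to make the other faster. */
void *watch_control(void *arg)  { (void)arg; /* e.g. react to a dialog control */ return NULL; }
void *repaint_window(void *arg) { (void)arg; /* e.g. redraw on a timer */ return NULL; }

/* Parallelism: one computation split across hardware purely for speed. */
static double data[N];
static double half_sum[2];

void *sum_half(void *arg)
{
    long which = (long)arg;
    long begin = which * (N / 2), end = begin + N / 2;
    double s = 0.0;
    for (long i = begin; i < end; ++i)
        s += data[i];
    half_sum[which] = s;
    return NULL;
}

int main(void)
{
    pthread_t ui, paint, worker;

    /* Two independent, concurrent tasks. */
    pthread_create(&ui, NULL, watch_control, NULL);
    pthread_create(&paint, NULL, repaint_window, NULL);

    /* One parallel computation: sum the array on two threads. */
    for (long i = 0; i < N; ++i) data[i] = 1.0;
    pthread_create(&worker, NULL, sum_half, (void *)1);
    sum_half((void *)0);                 /* main thread takes the other half */
    pthread_join(worker, NULL);
    printf("sum = %f\n", half_sum[0] + half_sum[1]);

    pthread_join(ui, NULL);
    pthread_join(paint, NULL);
    return 0;
}
```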
So, I have to ask in return: What books are you referring to? Are they about concurrent programming (I have a few of these; there's a lot of interesting theory there), or about parallel programming?
If they really are about parallel programming, I'll make a few observations:
- CUDA is a rapidly moving target, and has been since its release. A book written about it today would be half-obsolete by the time it made it into print.
- OpenCL's standard was released just under a year ago. Stable implementations came out over the last 8 months or so. There's simply not been enough time to get a book written yet, let alone revised and published.
- OpenMP is covered in at least a few of the parallel programming textbooks that I've used. Up to version 2 (v3 was just released), it was essentially all about data parallel programming.
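To give a flavor of that v2-era, data-parallel style, here's a minimal sketch (my own example, not taken from any of those textbooks): every iteration is independent, and a single pragma spreads the loop across whatever threads are available.

```c
/* Minimal OpenMP data-parallel loop (SAXPY-style).
   Build with: gcc -fopenmp -std=c99 saxpy.c */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N], y[N];
    const float a = 2.0f;

    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    /* The pragma splits the iteration space across the available threads;
       each iteration applies the same operation to its own slice of data. */
    #pragma omp parallel for
    for (int i = 0; i < N; ++i)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f (up to %d threads)\n", y[0], omp_get_max_threads());
    return 0;
}
```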