views: 147

answers: 2

It appears the general rule for prefetch usage is that a prefetch can be added as long as the code stays busy with other processing until the prefetch instruction completes. But it also seems that if too many prefetch instructions are used, overall system performance suffers. My current approach is to first get the code working without any prefetch instructions, then try various combinations of prefetch instructions in various locations of the code and analyse which locations actually improve because of the prefetch. Is there a better way to determine the exact locations where prefetch instructions should be used?

+3  A: 

In the majority of cases prefetch instructions are of little or no benefit and can even be counter-productive in some cases. Most modern CPUs have an automatic prefetch mechanism which works well enough that adding software prefetch hints achieves little, or even interferes with automatic prefetch, reducing performance.

In some rare cases, such as when you are streaming large blocks of data on which you are doing very little actual processing, you may manage to hide some latency with software-initiated prefetching, but it's very hard to get it right - you need to start the prefetch several hundred cycles before you are going to be using the data - do it too late and you still get a cache miss, do it too early and your data may get evicted from cache before you are ready to use it. Often this will put the prefetch in some unrelated part of the code, which is bad for modularity and software maintenance. Worse still, if your architecture changes (new CPU, different clock speed, etc), such that DRAM access latency increases or decreases, you may need to move your prefetch instructions to another part of the code to keep them effective.
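As a rough sketch of what such a streaming prefetch might look like (my own illustration, not part of the original answer: it assumes the GCC/Clang `__builtin_prefetch` builtin and an invented prefetch distance that you would need to tune, and for a kernel this trivial the hardware prefetcher would probably already do just as well):

    #include <stddef.h>

    /* Hypothetical streaming kernel: sum a large array while hinting the
     * CPU to start loading data we will touch a few iterations from now.
     * PREFETCH_AHEAD is a tuning knob, not a magic number - too small and
     * the data still misses, too large and it may be evicted before use. */
    #define PREFETCH_AHEAD 64   /* elements, i.e. several cache lines ahead */

    long sum_with_prefetch(const int *data, size_t n)
    {
        long sum = 0;
        for (size_t i = 0; i < n; i++) {
            if (i + PREFETCH_AHEAD < n)
                __builtin_prefetch(&data[i + PREFETCH_AHEAD], 0, 1); /* read, low temporal locality */
            sum += data[i];
        }
        return sum;
    }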

Anyway, if you feel you really must use prefetch, I recommend #ifdefs around any prefetch instructions so that you can compile your code with and without prefetch and see if it is actually helping (or hindering) performance, e.g.

#ifdef USE_PREFETCH
    __builtin_prefetch(p);  /* prefetch instruction(s), e.g. the GCC/Clang builtin; p is whatever address you expect to read soon */
#endif
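You can then build and time both variants, e.g. `gcc -O2 -DUSE_PREFETCH ...` versus a plain `gcc -O2 ...`, and keep the prefetch only if the measurements clearly favour it.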

In general though, I would recommend leaving software prefetch on the back burner as a last resort micro-optimisation after you've done all the more productive and obvious stuff.

Paul R
True, there are also cases where a lot of prefetching might be bad. For instance, suppose data w, x, y, z are prefetched in the order the software needs them; it may still happen that z evicts y from the cache because the cache is small, so y is not available even though it was prefetched :( Thanks for highlighting the problems with prefetch w.r.t. changes in CPU/clock speed, since they affect access latency and hence the right prefetch location. Yeah, the problems with software prefetch are difficult to pin down. But is there an easy way to find the right locations for prefetch?
S.Man
There is no easy way, in my experience, and in most cases the effort is not justified. You can get much more optimisation "bang per buck" from improving your algorithm and its implementation, paying attention to cache usage and memory access patterns, using SIMD, etc.
Paul R
A: 

Sure, you have to experiment a bit, but note that you need to prefetch some hundred cycles (100-300) before the data is needed. The L2 cache is big enough that the prefetched data can stay there a while.

This prefetching is very efficient in front of a loop (a few hundred cycles ahead, of course), especially if it is the inner loop and the loop is entered thousands of times per second.
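A minimal sketch of that idea (my own example, not Quonux's code: the matrix layout, row length and the 16-ints-per-64-byte-cache-line stride are all assumptions) is prefetching the next row just before the hot inner loop runs over the current row:

    #include <stddef.h>

    #define COLS 4096   /* hypothetical row length spanning many cache lines */

    long sum_rows(const int (*m)[COLS], size_t rows)
    {
        long sum = 0;
        for (size_t r = 0; r < rows; r++) {
            /* Before entering the inner loop, hint the next row so it is
             * (hopefully) cached by the time we get there. 16 ints = one
             * 64-byte cache line on typical x86 hardware (an assumption). */
            if (r + 1 < rows)
                for (size_t c = 0; c < COLS; c += 16)
                    __builtin_prefetch(&m[r + 1][c], 0, 1);

            for (size_t c = 0; c < COLS; c++)   /* the hot inner loop */
                sum += m[r][c];
        }
        return sum;
    }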

Also, for your own fast linked-list (LL) implementation or a tree implementation, prefetching can give a measurable advantage, because the CPU does not yet know that the data will be needed soon.
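For instance, here is a minimal linked-list sketch (my own illustration; the node layout and the amount of per-node work are invented) where the next node is prefetched while the current one is still being processed - a pattern the hardware prefetcher cannot help with, since the addresses are not sequential:

    #include <stddef.h>

    struct node {
        struct node *next;
        int payload[14];    /* hypothetical: enough work per node to hide some latency */
    };

    long walk_list(const struct node *n)
    {
        long sum = 0;
        while (n) {
            if (n->next)
                __builtin_prefetch(n->next, 0, 1);  /* start loading the next node now */
            for (int i = 0; i < 14; i++)            /* work on the current node meanwhile */
                sum += n->payload[i];
            n = n->next;
        }
        return sum;
    }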

But remember that prefetch instructions eat some decoder/queue bandwidth, so overusing them hurts performance for that reason alone.

Quonux