tags:
views: 213
answers: 2

What's the relationship between max work group size and warp size? Let's say my device has 240 CUDA streaming processors (SPs) and returns the following info:

CL_DEVICE_MAX_COMPUTE_UNITS: 30

CL_DEVICE_MAX_WORK_ITEM_SIZES: 512 / 512 / 64

CL_DEVICE_MAX_WORK_GROUP_SIZE: 512

CL_NV_DEVICE_WARP_SIZE: 32

This means it has 8 SPs per streaming multiprocessor (i.e. compute unit). Now, how is the warp size of 32 related to these numbers?

A: 

The warp size is the number of threads that a multiprocessor executes concurrently. An NVIDIA multiprocessor can execute several threads from the same block at the same time, using hardware multithreading.

It's important to consider the warp size, since all memory accesses are coalesced into multiples of the warp size (32 bytes, 64 bytes, 128 bytes), which improves performance.

The CUDA C Best Practices Guide contains all the technical information about this kind of optimization.

Matias Valdenegro
+2  A: 

Direct Answer: Warp size is the number of threads in a warp, which is a sub-division used in the hardware implementation to coalesce memory accesses and instruction dispatch.

Suggested Reading:

As @Matias mentioned, I'd go read the CUDA C Best Practices Guide (you'll have to scroll to the bottom where it's listed). It might help for you to stare at the table in Appendix G.1 on page 164.

Explanation:

CUDA is a language that provides parallelism at two levels. You have threads and you have blocks of threads. This is most evident when you execute a kernel: you need to specify the size of each thread block and the number of thread blocks between the <<< >>> that precede the kernel parameters.

What CUDA doesn't tell you is that things are actually happening at four levels, not two. Behind the scenes, your block of threads is divided into sub-blocks called "warps". Here's a brief metaphor to help explain what's really going on:

Brief Metaphor:

Pretend you're an educator/researcher/politician who's interested in the current mathematical ability of high school seniors. Your plan is to give a test to 10,240 students, but you can't just put them all in a football stadium or something and give them the test. It is easiest to subdivide (parallelize) your data collection -- so you go to twenty different high schools and ask that 512 of their seniors each take the math test.

You collect your data and that is all you care about. What you didn't know (and didn't really care about) is that each school is actually subdivided into classrooms. So your 512 seniors are actually divided into 16 groups of 32. And further, none of these schools really has the resources required -- each classroom only has sixteen calculators. Hence, at any one time only half of each classroom can take your math test.

I could go on stretching the metaphor with silly rules: only eight classrooms in any one school can take the test at a time because they only have eight teachers, and you can't sample more than 30 schools simultaneously because you only have 30 proctors...

Back to your question:

Using the metaphor, your program wants to compute results as fast as possible (you want to collect math tests). You issue a kernel with a certain number of blocks (schools), each of which has a certain number of threads (students). You can only have so many blocks running at one time (collecting your survey responses requires one proctor per school). In CUDA, thread blocks run on a Streaming Multiprocessor (SM). The variable CL_DEVICE_MAX_COMPUTE_UNITS tells you how many SMs (here, 30) a specific card has. This varies drastically based on the hardware -- check out the table in Appendix A of the CUDA C Best Practices Guide. Note that each SM can only work on 8 blocks simultaneously, regardless of the compute capability (1.X or 2.X).

Thread blocks have maximum dimensions, given by CL_DEVICE_MAX_WORK_ITEM_SIZES. Think of laying out your threads in a grid: you can't have a row with more than 512 threads, you can't have a column with more than 512 threads, and you can't stack more than 64 threads high. Next, there is a maximum number of threads, CL_DEVICE_MAX_WORK_GROUP_SIZE (here, 512), that can be grouped together in a block. So your thread block dimensions could be:

512 x 1 x 1

1 x 512 x 1

4 x 2 x 64

64 x 8 x 1

etc...

Note that as of Compute Capability 2.X, your blocks can have at most 1024 threads. Lastly, the variable CL_NV_DEVICE_WARP_SIZE specifies the warp size, 32 (the number of students per classroom). On Compute Capability 1.X devices, memory transfers and instruction dispatch occur at half-warp granularity (you only have 16 calculators per classroom). On Compute Capability 2.0, memory transfers are grouped by warp, so 32 fetches happen simultaneously, but instruction dispatch is still grouped by half-warp. For Compute Capability 2.1, both memory transfers and instruction dispatch occur by full warp, 32 threads. These things can and will change in future hardware.

So, my word! Let's get to the point:

In Summary:

I have described the nuances of warp/thread layout and other such stuff, but here are a couple of things to keep in mind. First, your memory accesses should be "groupable" in sets of 16 or 32, so keep the X dimension of your blocks a multiple of 32. Second, and most important to get the most from a specific GPU, you need to maximize occupancy. Don't have 5 blocks of 512 threads, and don't have 1,000 blocks of 10 threads. I would strongly recommend checking out the Excel-based CUDA Occupancy Calculator spreadsheet (it works in OpenOffice too, I think), which will tell you what the GPU occupancy will be for a specific kernel call (thread layout and shared memory requirements). Hope this explanation helps!

M. Tibbits
Very nice answer and metaphor. Just want to add that AMD has a similar notion called a wavefront, which is currently 64 threads per wavefront.
Stringer Bell
Huh. I didn't know that. I haven't spent much time looking at the AMD offerings. Do you have an idea if this will change dramatically with the Fusion offerings?
M. Tibbits
Future Fusion parts are all based on the Evergreen architecture, so the wavefront should stay at 64 threads: http://www.highperformancegraphics.org/media/Hot3D/HPG2010_Hot3D_AMD.pdf
Stringer Bell