views: 104
answers: 5

I have a program which accepts two N-digit numbers, multiplies them using threads, and prints the output.

The number of threads created here is 2 * N - 1.

Whenever I run the program for N > 151, it gives me a segmentation fault.

Is there a cap on the maximum number of threads a process can create?

If so, could this be a valid reason for the fault?

Edit:

Valgrind finds no memory leaks for N <= 150.

I'm running the program on a Linux 2.6.x kernel.

+1  A: 

That would be over 300 threads! Consider the massive overhead of the processor constantly switching between them and prioritizing them, as well as the threads from other applications. I think that using threads like that is a disaster waiting to happen, and it probably won't help your performance either.

I suspect there would be a maximum number of threads, considering that it is the operating system's job to manage them. I wouldn't use more than 100 threads; it is very much a bad idea.

Alexander Rafferty
+1  A: 

If under Linux: check PTHREAD_THREADS_MAX in limits.h. That is the maximum allowed thread count per process. Also: reaching it should not be a cause for a segfault; thread creation would simply fail.
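A minimal sketch of how you might inspect that limit (an assumption: glibc often leaves PTHREAD_THREADS_MAX undefined in limits.h, so a runtime sysconf() fallback is included; -1 from sysconf means no fixed limit):

    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
    #ifdef PTHREAD_THREADS_MAX
        /* compile-time constant, if the platform defines it */
        printf("PTHREAD_THREADS_MAX = %ld\n", (long) PTHREAD_THREADS_MAX);
    #else
        /* runtime query; -1 means no fixed per-process thread limit */
        long max = sysconf(_SC_THREAD_THREADS_MAX);
        if (max == -1)
            printf("no fixed per-process thread limit reported\n");
        else
            printf("per-process thread limit = %ld\n", max);
    #endif
        return 0;
    }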

Mario The Spoon
+1  A: 

Your question doesn’t specify the operating environment, which is necessary to be able to answer your first question, but if you’re CPU-bound and the number of threads you have exceeds the number of processor cores (2 or 4 on most notebooks), then you’re probably wasting resources.

For the second question, no, it’s not a valid reason for a segmentation fault. Presuming you’re creating this ridiculous number of threads for some good reason that we’re not aware of, double-check your semaphore usage and your resource-allocation results.
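For example, a minimal sketch of the kind of checks meant here (sem_init and malloc are stand-ins for whatever synchronization and allocation the actual program does; the name "digits" is hypothetical):

    #include <semaphore.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        sem_t sem;

        /* sem_init returns -1 and sets errno on failure */
        if (sem_init(&sem, 0, 1) == -1) {
            perror("sem_init");
            return 1;
        }

        /* a failed allocation that is dereferenced later is a
         * classic source of segmentation faults */
        int *digits = malloc(1000 * sizeof *digits);
        if (digits == NULL) {
            fprintf(stderr, "out of memory\n");
            sem_destroy(&sem);
            return 1;
        }

        free(digits);
        sem_destroy(&sem);
        return 0;
    }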

danorton
+1  A: 

My Ubuntu box shows a limit of 123858, so I doubt you're running into it with 300, but your pthread_create would return non-zero if you were. Make sure to check the return value.
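A minimal sketch of that check (the worker function is a placeholder for the actual multiplication routine; build with the -pthread flag):

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    /* placeholder thread body */
    static void *worker(void *arg)
    {
        return arg;
    }

    int main(void)
    {
        pthread_t tid;

        /* pthread functions return the error code directly
         * instead of setting errno */
        int err = pthread_create(&tid, NULL, worker, NULL);
        if (err != 0) {
            fprintf(stderr, "pthread_create: %s\n", strerror(err));
            return 1;
        }

        pthread_join(tid, NULL);
        return 0;
    }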

Compile with -g and run under gdb to debug segmentation faults instead of guessing at the cause. It will point you to the exact line of the crash and let you inspect the variable values there.

I would also suspect possible synchronization issues such as missing mutexes, but if that were the cause you would most likely see problems with smaller values of N as well, just less frequently.

Karl Bielefeldt
+7  A: 

By default, each thread gets an 8 MB stack. 300 threads at 8 MB each is 2.4 GB just for thread stacks; if you're running in 32-bit mode, that's probably most of your allowed process address space.

You can use pthread_attr_setstacksize() to reduce the size of your thread stacks to something a bit more sane before you create them:

int pthread_attr_setstacksize(pthread_attr_t *attr, size_t stacksize);

(Create a new pthread_attr_t, set the stack size, then pass it to pthread_create, as in the sketch below.)
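A minimal sketch of that sequence (the worker function and the 64 KiB figure are placeholders; any size at or above PTHREAD_STACK_MIN is accepted):

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static void *worker(void *arg)  /* placeholder thread body */
    {
        return arg;
    }

    int main(void)
    {
        pthread_attr_t attr;
        pthread_t tid;
        int err;

        pthread_attr_init(&attr);

        /* 64 KiB: above PTHREAD_STACK_MIN, far below the 8 MB default */
        err = pthread_attr_setstacksize(&attr, 64 * 1024);
        if (err != 0) {
            fprintf(stderr, "pthread_attr_setstacksize: %s\n", strerror(err));
            return 1;
        }

        err = pthread_create(&tid, &attr, worker, NULL);
        if (err != 0) {
            fprintf(stderr, "pthread_create: %s\n", strerror(err));
            return 1;
        }

        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }

With roughly 300 threads at 64 KiB each, the stacks total under 20 MB instead of 2.4 GB.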

caf
Now that I've checked, pthread_create won't allow the creation of the 303rd thread, even though I reset the stack size to 80 bytes.
Kedar Soparkar
Are there some other resource size limits that must also be reduced?
Kedar Soparkar
@crypto: If you try to reduce the stack size below `PTHREAD_STACK_MIN` (16384 on Linux), `pthread_attr_setstacksize()` will fail and the attribute will keep its previous size. Try setting it to a decent value (e.g. `65536`).
caf
@crypto: you should always check the return value of the functions you call, including pthread_create() and pthread_attr_setstacksize(). It will save you time in the long run.
ninjalj