How do you tell the thread scheduler in Linux not to interrupt your thread for any reason? I am programming in user mode. Does simply locking a mutex accomplish this? I want to prevent other threads in my process from being scheduled while a certain function is executing; they would block, and I would be wasting CPU cycles on context switches. I want any thread executing the function to be able to finish without interruption, even if the thread's timeslice is exceeded.

+1  A: 

You can't. If you could, what would prevent your thread from never releasing the CPU and starving the other threads?

The best you can do is set your thread's priority so that the scheduler will prefer it over lower-priority threads.
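Something like this, as a rough sketch only (the function names are made up, and moving a thread to SCHED_FIFO normally needs root, CAP_SYS_NICE, or a permissive RLIMIT_RTPRIO):

    #include <pthread.h>
    #include <sched.h>

    /* Sketch: bump the calling thread to a real-time priority for the duration
     * of one function, then restore the old policy afterwards. */
    void run_at_raised_priority(void (*critical)(void *), void *arg)
    {
        pthread_t self = pthread_self();
        int old_policy;
        struct sched_param old_param, rt_param;

        pthread_getschedparam(self, &old_policy, &old_param);

        /* Even the lowest SCHED_FIFO priority outranks normal SCHED_OTHER threads. */
        rt_param.sched_priority = sched_get_priority_min(SCHED_FIFO);
        if (pthread_setschedparam(self, SCHED_FIFO, &rt_param) != 0) {
            critical(arg);          /* no privileges: just run normally */
            return;
        }

        critical(arg);              /* the work you don't want preempted */

        pthread_setschedparam(self, old_policy, &old_param);   /* restore */
    }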

R Samuel Klatchko
Dang. Thread priority is a problem unless I can raise it when I enter the function and then lower it when I exit. Also, what would that cost in terms of cycles? The worker threads are the ones I am worried about, and there will be a lot of them.
johnnycrash
+1  A: 

Why not simply let the competing threads block? Then the scheduler will have nothing left to schedule but your running thread. Why complicate the design by second-guessing the scheduler?

Will Hartung
Well, I was thinking that if thread A locked a resource toward the end of its timeslice, it could be preempted. The scheduler would then cycle through all the other worker threads; let's say there are 50. Since this function is highly likely to be hit, each of the 50 threads might execute for a short time and then block. So I figure I just had 50 context switches because of the preemptive scheduler. Needless waste.
johnnycrash
A: 

You should architect your software so you're not dependent on the scheduler doing the "right" thing from your app's point of view. The scheduler is complicated. It will do what it thinks is best.

Context switches are cheap. You say

I would be wasting CPU cycles on context switches.

but you should not look at it that way. Use the multithreaded machinery of mutexes and blocked/waiting processes. The machinery is there for you to use...

Larry K
Yeah, I agree. I don't want to reinvent the wheel; I just want to know how to use the wheel to the max. I read somewhere that a mutex can be locked and unlocked on the order of thousands of times a second. That's too slow for what I need. You are right about architecting. Unfortunately I have 20-year-old code, so my first plan was to try something that worked with minimal changes, hence the questions about preemption. Also, it uses memmap a lot. My guess is memmap calls malloc in 4k chunks. I might have 64 simultaneous memmaps going on, all wanting to malloc 50 times each.
johnnycrash
Well, mmap doesn't call malloc; it faults in memory pages in the kernel. Also, how many thousands per second is too slow? Locking/unlocking a mutex with no contention can be done on the order of many millions per second; when there's contention and threads have to block, it gets slower, though.
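If you want a number for your own machine, here is a rough, illustrative way to measure the uncontended case (single thread, build with -lpthread):

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    /* Rough sketch of timing uncontended lock/unlock pairs on one thread.
     * Results vary a lot by CPU and libc; contention changes the picture. */
    int main(void)
    {
        pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
        const long iters = 10000000L;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iters; i++) {
            pthread_mutex_lock(&m);
            pthread_mutex_unlock(&m);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("~%.0f lock/unlock pairs per second\n", iters / secs);
        return 0;
    }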
nos
+1  A: 

Look into real-time scheduling under Linux. I've never done it, but if you really do NEED this, it is as close as you can get in user application code.

What you seem to be scared of isn't really that big of a deal, though. You can't stop the kernel from interrupting your program for real interrupts or if a higher-priority task wants to run, but with regular scheduling the kernel uses its own computed priority value, which pretty much handles most of what you are worried about. If thread A is holding resource X exclusively (X could be a lock) and thread B is waiting on resource X to become available, then A's effective priority will be at least as high as B's. The kernel also takes into account whether a process is using up lots of CPU or spending lots of time sleeping when computing the priority. Of course, the nice value goes in there too.

nategoose
Thanks. I'm not worried about the kernel preempting me, just other threads in my process, so those kinds of interrupts would be OK. An interrupt to let another thread in my process run is what I am worried about.
johnnycrash
The words you are using don't seem to match what you want, and I'm not sure that what you want is what you really want. You may be wanting cooperative multithreading, or you may be wanting something else. Do you mean that while thread A holds lock X, no other thread that wants lock X can execute? Because that's pretty much what all of the thread synchronization primitives are for; you just have to design your code correctly, acquiring and releasing the locks at the right time/place. If you mean that when thread A holds lock X no other thread can execute at all, you should look into SIGSTOP.
nategoose
+1  A: 

How do you tell the thread scheduler in linux to not interrupt your thread for any reason?

Can't really be done; you need a real-time system for that. The closest thing you'll get with Linux is to set the scheduling policy to a real-time scheduler, e.g. SCHED_FIFO, and also set the PTHREAD_EXPLICIT_SCHED attribute. See e.g. here. Even then, irq handlers and other things will still interrupt your thread and run.
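As a minimal sketch of what that setup could look like (the priority value and worker function are illustrative, and the create call will usually fail with EPERM without root, CAP_SYS_NICE, or a suitable RLIMIT_RTPRIO):

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    static void *worker(void *arg)      /* stand-in for the real thread function */
    {
        (void)arg;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_attr_t attr;
        struct sched_param param;
        int err;

        pthread_attr_init(&attr);
        /* Without PTHREAD_EXPLICIT_SCHED the policy/priority set below are
         * ignored and the new thread just inherits the creator's scheduling. */
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        param.sched_priority = 10;      /* SCHED_FIFO priorities run 1..99 */
        pthread_attr_setschedparam(&attr, &param);

        err = pthread_create(&tid, &attr, worker, NULL);
        if (err != 0)
            fprintf(stderr, "pthread_create: %s\n", strerror(err));
        else
            pthread_join(tid, NULL);

        pthread_attr_destroy(&attr);
        return 0;
    }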

However, if you only care about the threads in your own process not being able to do anything, then yes, having them block on a mutex your running thread holds is sufficient.

The hard part is to coordinate all the other threads to grab that mutex whenever your thread needs to do its thing.
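In sketch form (names made up), the pattern is just that every thread takes the same mutex around the shared section, so the workers block while your thread runs:

    #include <pthread.h>

    static pthread_mutex_t store_lock = PTHREAD_MUTEX_INITIALIZER;

    void worker_touch_shared_state(void)
    {
        pthread_mutex_lock(&store_lock);
        /* ... read or update the shared data ... */
        pthread_mutex_unlock(&store_lock);
    }

    void must_finish_without_interference(void)
    {
        pthread_mutex_lock(&store_lock);
        /* ... the function you want to run while the other threads wait ... */
        pthread_mutex_unlock(&store_lock);
    }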

nos
Basically it's a bunch of worker threads and a work queue. Right now I have almost no locking since I prepare the work and then fire off the threads. For phase 2 I want to increase the parallelism by having the threads add more work to the queue as they go. To do this, the threads have to check a common data store to see if the data exists. Nonexistent data = the work for the queue. It is this function that checks the common data store that I fear would be accessed a lot. I think I just need to make the part that absolutely requires locking extra small with a rewrite.
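One way that could be shaped, purely for illustration (the two helpers here are hypothetical, not from the actual code): only the "does it exist / mark it pending" check runs under the lock, and building/enqueuing the work happens outside it.

    #include <pthread.h>
    #include <stdbool.h>

    extern bool store_has_or_mark_pending(int key);   /* assumed helper */
    extern void work_queue_push(int key);             /* assumed helper */

    static pthread_mutex_t store_lock = PTHREAD_MUTEX_INITIALIZER;

    void maybe_queue_work(int key)
    {
        bool missing;

        pthread_mutex_lock(&store_lock);
        missing = !store_has_or_mark_pending(key);     /* tiny critical section */
        pthread_mutex_unlock(&store_lock);

        if (missing)
            work_queue_push(key);   /* done without holding the store lock */
    }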
johnnycrash
I've been using this http://asgaard.homelinux.org/svn/threadqueue/ to do similar stuff. You fire off the number of worker threads you want and feed them work through the threadqueue, which blocks and waits until data is added to the queue. If you don't have more worker threads than you have cores, the contention between them should be minimal.
nos