views: 444
answers: 3

I can't find a good, clean way to lock a critical section in Django. I could use a lock or semaphore, but Python's implementations only work across threads, so if the production server forks they will not be respected. Does anyone know of a way (I am thinking POSIX semaphores right now) to guarantee a lock across processes, or, barring that, a way to stop a Django server from forking?

+2  A: 

You could use simple file locking as a mutual exclusion mechanism; see my recipe here. It won't suit all scenarios, but then you haven't said much about why you want this type of locking.
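
A minimal sketch of the idea (not the linked recipe itself; the lock-file path and the do_provisioning() call are placeholders of my own):

import fcntl

def run_exclusively(path='/tmp/myapp.lock'):
    # Open (creating if necessary) a well-known lock file and take an exclusive
    # advisory lock on it; any other process doing the same will block here.
    with open(path, 'w') as handle:
        fcntl.flock(handle, fcntl.LOCK_EX)
        try:
            do_provisioning()  # hypothetical critical-section work
        finally:
            fcntl.flock(handle, fcntl.LOCK_UN)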

Vinay Sajip
I am doing provisioning of virtual servers. I don't want to queue requests, because I want the XML-RPC interface exposed through Django to report whether or not the system has room for a new virtual server. Someone using the interface could loop and spin up ten servers in quick succession, so the provisioning algorithm needs to be a locked critical section so I don't get errors in that situation.
stinkypyper
Then, as long as requesters don't mind getting locked out and retrying, it should be possible to use the file locking approach.
Vinay Sajip
A: 

I ended up going with a solution I put together myself involving file locking. If anyone here ends up using it, remember that advisory locks and NFS don't mix well, so keep the lock file local. Also, this is a blocking lock; if you would rather poll in a loop, see fcntl.LOCK_NB in the comments (a non-blocking sketch follows the usage example below).

import os
import fcntl

class DjangoLock:

    def __init__(self, filename):
        self.filename = filename
        # This will create it if it does not exist already
        self.handle = open(filename, 'w')

    # flock() blocks unless the operation is OR'ed with fcntl.LOCK_NB.
    # That blocking is what provides mutual exclusion across forked Django
    # processes, since native Python locks and semaphores only work at the
    # thread level.
    def acquire(self):
        fcntl.flock(self.handle, fcntl.LOCK_EX)

    def release(self):
        fcntl.flock(self.handle, fcntl.LOCK_UN)

    def __del__(self):
        self.handle.close()

Usage:

lock = DjangoLock('/tmp/djangolock.tmp')
lock.acquire()
try:
    pass
finally:
    lock.release()
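
For the non-blocking variant mentioned above, a rough sketch (the retry count, delay, and error handling are my assumptions, not part of the class):

import errno
import fcntl
import time

def acquire_nonblocking(handle, retries=10, delay=0.5):
    # With LOCK_NB, flock() raises IOError instead of blocking when the lock
    # is already held, so we can poll and give up after a few attempts.
    for _ in range(retries):
        try:
            fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True
        except IOError as e:
            if e.errno not in (errno.EACCES, errno.EAGAIN):
                raise
            time.sleep(delay)
    return False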
stinkypyper
+2  A: 

You need a distributed lock manager at the point where your app suddenly needs to run on more than one server. I wrote elock for this purpose. There are bigger ones, and others have chosen to ignore every suggestion and do the same thing with memcached.

Please don't use memcached for anything more than light advisory locking. It is designed to forget stuff.

I like to pretend filesystems don't exist when I'm building web apps. It makes things scale better.
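
For comparison, the light advisory locking people do with memcached usually leans on an atomic add. A rough sketch using Django's cache API (the key name, timeout, and do_provisioning() are placeholders), with the caveat above: memcached can evict the key at any moment, so the lock can silently vanish:

from django.core.cache import cache

LOCK_KEY = 'provisioning-lock'   # hypothetical key name
LOCK_TIMEOUT = 60                # seconds until the lock expires on its own

def try_provision():
    # cache.add() only stores the key if it does not already exist, so the
    # first caller to add it "holds" the lock until it is deleted or expires.
    if not cache.add(LOCK_KEY, 'locked', LOCK_TIMEOUT):
        return False             # someone else holds the lock right now
    try:
        do_provisioning()        # hypothetical critical-section work
    finally:
        cache.delete(LOCK_KEY)
    return True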

Dustin
Your bad-ass, Erlang-written, clean-code-looking, test-case-having distributed file locker is clearly better than my simple non-distributed lock, so shall it be known, so shall it be written. One thing though: do you have any example usage (not just setup) of it?
stinkypyper
Thanks for the review. :) The only docs I've written so far are linked from the project page. Should lead you here: http://dustin.github.com/elock/admin.html
Dustin
Now I am guessing this thing is basically HTTP (I see 200, 409), right? Can I change the port the server binds to?
stinkypyper
http semantics wouldn't work here, but it's always possible to change a port number. :)
Dustin