I can't find a good, clean way to lock a critical section in Django. I could use a lock or a semaphore, but the Python implementations are thread-only, so if the production server forks those won't be respected across processes. Does anyone know of a way (I am thinking POSIX semaphores right now) to guarantee a lock across processes, or, barring that, a way to stop a Django server from forking?
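For illustration, here is a minimal sketch of the POSIX-semaphore idea using the third-party posix_ipc package (the semaphore name /my-app-lock is arbitrary; this is not tested against any particular Django deployment):

import posix_ipc

# A named POSIX semaphore lives in the kernel, not in the Python process,
# so every process that opens the same name sees the same semaphore --
# forked or not.
sem = posix_ipc.Semaphore('/my-app-lock', posix_ipc.O_CREAT, initial_value=1)

sem.acquire()
try:
    pass  # critical section
finally:
    sem.release()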
You could use simple file locking as a mutual exclusion mechanism; see my recipe here. It won't suit all scenarios, but then you haven't said much about why you want this type of locking.
I ended up going with a solution of my own involving file locking. If anyone here ends up using it, remember that advisory locks and NFS don't mix well, so keep the lock file local. Also, this is a blocking lock; if you'd rather poll in a loop instead, there are instructions in the code.
import fcntl

class DjangoLock:

    def __init__(self, filename):
        self.filename = filename
        # This will create the file if it does not exist already.
        self.handle = open(filename, 'w')

    # flock() is a blocking call unless it is bitwise ORed with LOCK_NB to
    # avoid blocking on lock acquisition. This blocking is what I use to
    # provide atomicity across forked Django processes, since native Python
    # locks and semaphores only work at the thread level.
    def acquire(self):
        fcntl.flock(self.handle, fcntl.LOCK_EX)

    def release(self):
        fcntl.flock(self.handle, fcntl.LOCK_UN)

    def __del__(self):
        self.handle.close()
Usage:
lock = DjangoLock('/tmp/djangolock.tmp')
lock.acquire()
try:
pass
finally:
lock.release()
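A possible refinement, not part of the original class: context-manager methods plus a non-blocking mode via LOCK_NB, so the lock works in a with statement and can be polled in a loop. A sketch:

import fcntl

class DjangoLock:

    def __init__(self, filename):
        self.handle = open(filename, 'w')

    def acquire(self, blocking=True):
        # With LOCK_NB, flock() raises BlockingIOError instead of waiting,
        # which lets callers poll in a loop rather than block.
        flags = fcntl.LOCK_EX if blocking else fcntl.LOCK_EX | fcntl.LOCK_NB
        try:
            fcntl.flock(self.handle, flags)
            return True
        except BlockingIOError:
            return False

    def release(self):
        fcntl.flock(self.handle, fcntl.LOCK_UN)

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.release()

# The lock is released even if the block raises:
with DjangoLock('/tmp/djangolock.tmp'):
    pass  # critical section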
You need a distributed lock manager at the point where your app suddenly needs to run on more than one server. I wrote elock for this purpose. There are bigger ones, and others have chosen to ignore every suggestion and build the same thing on memcached.
Please don't use memcached for anything more than light advisory locking. It is designed to forget stuff.
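For the light advisory case, the usual trick is memcached's atomic add, which succeeds only if the key does not already exist. A sketch using the python-memcached client; the key prefix and TTL are arbitrary choices:

import time
import memcache  # python-memcached

mc = memcache.Client(['127.0.0.1:11211'])

def try_lock(name, ttl=30):
    # add() is atomic: it stores the key only if it does not already exist,
    # so at most one caller "wins". The TTL keeps a crashed holder from
    # wedging the lock forever -- but memcached may also evict the key
    # early, which is exactly why this is only advisory.
    return mc.add('lock:' + name, '1', time=ttl)

def unlock(name):
    mc.delete('lock:' + name)

# Poll until the lock is acquired, giving up after a few tries.
for _ in range(10):
    if try_lock('my-critical-section'):
        try:
            pass  # critical section
        finally:
            unlock('my-critical-section')
        break
    time.sleep(0.5)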
I like to pretend filesystems don't exist when I'm building web apps. It makes scaling easier.