flock locks don't care about threads; in fact, they don't care about processes, either. If you take the same file descriptor into two processes (inherited through a fork), either process locking the file with that FD acquires the lock for both processes. In other words, in the following code both flock calls return success: the child process locks the file, and then the parent process acquires the same lock rather than blocking, because both are using the same FD.
import fcntl, os, time

f = open("testfile", "w+")
print("Locking...")
fcntl.flock(f.fileno(), fcntl.LOCK_EX)
print("locked")
fcntl.flock(f.fileno(), fcntl.LOCK_UN)

if os.fork() == 0:
    # We're in the child process, and we have an inherited copy of the fd.
    # Lock the file.
    print("Child process locking...")
    fcntl.flock(f.fileno(), fcntl.LOCK_EX)
    print("Child process locked")
    time.sleep(1000)
else:
    # We're in the parent. Give the child process a moment to lock the file.
    time.sleep(0.5)
    print("Parent process locking...")
    fcntl.flock(f.fileno(), fcntl.LOCK_EX)
    print("Parent process locked")
    time.sleep(1000)
By the same token, if you lock the same file twice but with different file descriptors, the locks will block each other, regardless of whether you're in the same process or even the same thread. See flock(2): "If a process uses open(2) (or similar) to obtain more than one descriptor for the same file, these descriptors are treated independently by flock(). An attempt to lock the file using one of these file descriptors may be denied by a lock that the calling process has already placed via another descriptor."
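As a quick illustration of that paragraph from the man page (the filename "testfile" is just an example), here's a sketch that opens the same file twice in one process, locks it through the first descriptor, and then makes a non-blocking attempt through the second; the second attempt should be refused with EWOULDBLOCK even though the lock holder is the very same process:

import fcntl, errno

# Two independent descriptors for the same file, in the same process.
a = open("testfile", "w+")
b = open("testfile", "w+")

fcntl.flock(a.fileno(), fcntl.LOCK_EX)
try:
    # Non-blocking attempt through the second descriptor.
    fcntl.flock(b.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
    print("second descriptor got the lock (unexpected)")
except OSError as e:
    if e.errno == errno.EWOULDBLOCK:
        print("second descriptor was refused, as flock(2) describes")
    else:
        raise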
It's useful to remember that to the Linux kernel, processes and threads are essentially the same thing, and they're generally treated the same by kernel-level APIs. For the most part, if a syscall documents interprocess child/parent behavior, the same will hold for threads.
Of course, you can (and probably should) test this behavior yourself.
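For example, here's a rough sketch (again, the filename and the structure are just for illustration) that exercises both cases from two threads in one process: the thread that reuses the shared descriptor "acquires" the lock we already hold and returns immediately, while the thread that open()s its own descriptor is refused:

import fcntl, threading

f = open("testfile", "w+")
fcntl.flock(f.fileno(), fcntl.LOCK_EX)

def same_fd():
    # Same descriptor, different thread: this returns immediately,
    # since the lock is already held on this FD.
    fcntl.flock(f.fileno(), fcntl.LOCK_EX)
    print("thread with the shared fd: locked immediately")

def own_fd():
    # A separate descriptor for the same file: this would block, so
    # LOCK_NB turns the wait into an immediate error instead.
    g = open("testfile", "w+")
    try:
        fcntl.flock(g.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        print("thread with its own fd: locked (unexpected)")
    except OSError:
        print("thread with its own fd: would have blocked")

for target in (same_fd, own_fd):
    t = threading.Thread(target=target)
    t.start()
    t.join()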