AIX (and HPUX, if anyone cares) has a nice little feature called msemaphores that makes it easy to synchronize granular pieces (e.g. records) of memory-mapped files shared by multiple processes. Is anyone aware of something comparable on Linux?

To be clear, the msemaphore functions are the ones described by the related links here.

A: 

Under Linux, you may be able to achieve what you want with SysV shared memory; quick googling turned up this (rather old) guide that may be of help.
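
For reference, a minimal sketch of the SysV route: create a segment with shmget(2), attach it with shmat(2), and place your records and a synchronization primitive inside it. The key and segment size below are arbitrary illustrations, and error handling is kept to a minimum.

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void) {
    /* Arbitrary key for illustration; real code would derive one with ftok(3). */
    key_t key = 0x1234;

    /* Create (or open) a 4 KiB segment, owner read/write. */
    int shmid = shmget(key, 4096, IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); exit(1); }

    /* Attach it at an address chosen by the kernel. */
    void *mem = shmat(shmid, NULL, 0);
    if (mem == (void *)-1) { perror("shmat"); exit(1); }

    /* ... place records and a process-shared lock in `mem` ... */

    shmdt(mem);                     /* detach from this process */
    shmctl(shmid, IPC_RMID, NULL);  /* mark the segment for removal */
    return 0;
}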

andri
Thanks. Msemaphores offer a convenience and simplicity that I had hoped to find already implemented rather than having to build it myself.
Duck
A: 

POSIX semaphores can be placed in memory shared between processes, if the second argument to sem_init(3), "pshared", is nonzero. This seems to be the same as what msem does.

#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>
int main() {
    void *shared;
    sem_t *sem;
    int counter, *data;
    pid_t pid;
    srand(time(NULL));
    /* Anonymous shared mapping: visible to both parent and child after fork(). */
    shared = mmap(NULL, sysconf(_SC_PAGE_SIZE), PROT_READ | PROT_WRITE,
            MAP_ANONYMOUS | MAP_SHARED, -1, 0);
    /* Place the semaphore at the start of the page; pshared = 1 (the second
     * argument) makes it usable across processes. */
    sem_init(sem = shared, 1, 1);
    data = (int *)((char *)shared + sizeof(sem_t));
    counter = *data = 0;
    pid = fork();
    while (1) {
        sem_wait(sem);
        if (pid)                        /* parent: publish a new pair */
            printf("ping>%d %d\n", data[0] = rand(), data[1] = rand());
        else if (counter != data[0]) {  /* child: echo each new pair */
            printf("pong<%d", counter = data[0]);
            sleep(2);                   /* hold the lock to prove mutual exclusion */
            printf(" %d\n", data[1]);
        }
        sem_post(sem);
        if (pid) sleep(1);
    }
}

This is a pretty dumb test, but it works:

$ cc -o test -lrt test.c
$ ./test
ping>2098529942 315244699
pong<2098529942 315244699
pong<1195826161 424832009
ping>1195826161 424832009
pong<1858302907 1740879454
ping>1858302907 1740879454
ping>568318608 566229809
pong<568318608 566229809
ping>1469118213 999421338
pong<1469118213 999421338
ping>1247594672 1837310825
pong<1247594672 1837310825
ping>478016018 1861977274
pong<478016018 1861977274
ping>1022490459 935101133
pong<1022490459 935101133
...

Because the semaphore is shared between the two processes, the pongs don't get interleaved data from the pings despite the sleeps.

ephemient
A: 

This can be done using POSIX mutexes with the process-shared attribute:

pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
/* setpshared takes the value itself, not a pointer to it */
pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);

pthread_mutex_init(&some_shared_mmap_structure.mutex, &attr);
pthread_mutexattr_destroy(&attr);

Now you can lock and unlock &some_shared_mmap_structure.mutex with ordinary pthread_mutex_lock() and pthread_mutex_unlock() calls, from any process that has it mapped.
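
For instance, a minimal end-to-end sketch; the file name and record layout are invented for illustration, and error checking is omitted:

#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>

struct record {
    pthread_mutex_t mutex;
    int value;
};

int main(void) {
    /* Illustrative path; any file every participating process can open works. */
    int fd = open("/tmp/shared.dat", O_RDWR | O_CREAT, 0600);
    ftruncate(fd, sizeof(struct record));

    struct record *rec = mmap(NULL, sizeof(struct record),
            PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* One process initializes the mutex; the process-shared attribute
     * makes it usable by every process that maps the file. */
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&rec->mutex, &attr);
    pthread_mutexattr_destroy(&attr);

    /* Any process with the mapping can now take the lock. */
    pthread_mutex_lock(&rec->mutex);
    rec->value++;
    pthread_mutex_unlock(&rec->mutex);

    munmap(rec, sizeof(struct record));
    close(fd);
    return 0;
}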

Indeed, you can even implement the msem API in terms of this: (untested)

#include <assert.h>
#include <pthread.h>

typedef struct msemaphore {
    pthread_mutex_t mut;
} msemaphore;

#define MSEM_LOCKED 1
#define MSEM_UNLOCKED 0
#define MSEM_IF_NOWAIT 1

msemaphore *msem_init(msemaphore *msem_p, int initialvalue) {
    pthread_mutexattr_t attr;

    assert(((unsigned long)msem_p & 7) == 0); // check alignment

    pthread_mutexattr_init(&attr);
    // setpshared takes the value directly; might fail, you should probably check
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&msem_p->mut, &attr); // never fails
    pthread_mutexattr_destroy(&attr);

    if (initialvalue)
        pthread_mutex_lock(&msem_p->mut);

    return msem_p;
}

int msem_remove(msemaphore *msem) {
    return pthread_mutex_destroy(&msem->mut) ? -1 : 0;
}

int msem_lock(msemaphore *msem, int cond) {
    int ret;
    if (cond == MSEM_IF_NOWAIT)
        ret = pthread_mutex_trylock(&msem->mut);
    else
        ret = pthread_mutex_lock(&msem->mut);

    return ret ? -1 : 0;
}

int msem_unlock(msemaphore *msem, int cond) {
    // pthreads does not allow us to directly ascertain whether there are
    // waiters. However, an unlock/trylock with no contention is -very- fast
    // using Linux's pthreads implementation, so just do that instead if
    // you care.
    //
    // nb, only fails if the mutex is not initialized
    return pthread_mutex_unlock(&msem->mut) ? -1 : 0;
}
bdonlan
While it is unlikely that the OP requires a semaphore (a mutex is sufficient for almost all purposes), what you've implemented is *not* a semaphore. Hint: initialvalue can take on any nonnegative value, and zero means *locked*.
ephemient
However, what the OP linked to was a mutex that only called itself a semaphore - at least, from my read of the docs in question :)
bdonlan
Upon a closer re-reading, it appears that you are correct. What a misleading name!
ephemient
My thinking is that it is implemented as a semaphore even though we think of and use it like a mutex. My reasons for thinking this are twofold. (1) There is no restriction that only the process/thread that locked it can unlock it, thus breaking a fundamental tenet of a mutex. (2) The MSEM_IF_WAITERS flag only makes sense to me if the implementation is keeping track of waiters with something like sem_getvalue() [POSIX] or, more likely, semop() with the GETNCNT/GETZCNT options [SysV]. I guess we won't know for sure until IBM or HP opens the source.
Duck
Since the default Linux pthreads implementation of mutexes doesn't do any checking to ensure the owner of the lock is the one to unlock it, that part shouldn't matter :) Anyway, I don't see anything in the documentation that talks about other numbers of waiters, but you could extend this easily enough to make a counting semaphore (a minimal sketch follows) - there's a shareable condition variable option too, after all
bdonlan
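
A minimal sketch of that extension, for anyone curious: a counting semaphore built from a process-shared mutex and condition variable. The names here (shared_sem and friends) are invented for illustration, and error checking is omitted.

#include <pthread.h>

/* Hypothetical counting semaphore for use in memory mapped by
 * multiple processes. */
typedef struct {
    pthread_mutex_t mut;
    pthread_cond_t cond;
    int count;
} shared_sem;

int shared_sem_init(shared_sem *s, int value) {
    pthread_mutexattr_t ma;
    pthread_condattr_t ca;
    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&s->mut, &ma);
    pthread_cond_init(&s->cond, &ca);
    s->count = value;
    pthread_mutexattr_destroy(&ma);
    pthread_condattr_destroy(&ca);
    return 0;
}

void shared_sem_wait(shared_sem *s) {
    pthread_mutex_lock(&s->mut);
    while (s->count == 0)              /* block until someone posts */
        pthread_cond_wait(&s->cond, &s->mut);
    s->count--;
    pthread_mutex_unlock(&s->mut);
}

void shared_sem_post(shared_sem *s) {
    pthread_mutex_lock(&s->mut);
    s->count++;
    pthread_cond_signal(&s->cond);     /* wake one waiter, if any */
    pthread_mutex_unlock(&s->mut);
}

As with the mutex version, the structure would live in a MAP_SHARED mapping (or SysV segment) visible to all participating processes, and one process would call shared_sem_init before the others touch it.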