I'm looking at porting an old driver, which generates a large and complex set of data tables, into user space, because the tables have grown large enough that memory consumption is a serious problem.

Since performance is critical, and because there will be 16-32 simultaneous readers of the data, we thought we'd replace the old /dev-based interface to the code with a shared-memory model that lets clients search the tables directly rather than querying a daemon.

The question is: what's the best way to do that? I could use shm_open() directly, but that would probably require me to devise my own record locking and possibly even an ISAM-style data structure for the shared memory.
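Something like this is what I mean by using shm_open() directly; a minimal sketch, where the segment name /drvtables and TABLE_BYTES are placeholders:

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define TABLE_BYTES (64 * 1024 * 1024)  /* hypothetical table size */

    /* Map the shared table segment; the creator sizes it with ftruncate(). */
    void *map_tables(int create)
    {
        int flags = create ? (O_CREAT | O_RDWR) : O_RDWR;
        int fd = shm_open("/drvtables", flags, 0644);
        if (fd < 0)
            return NULL;
        if (create && ftruncate(fd, TABLE_BYTES) < 0) {
            close(fd);
            return NULL;
        }
        void *base = mmap(NULL, TABLE_BYTES, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        close(fd);  /* the mapping stays valid after close */
        return base == MAP_FAILED ? NULL : base;
    }

The plumbing is short; it's the locking and the lookup structure on top that's the real work.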

Rather than writing my own code to revisit the 1970s, is there a high-performance shared-memory API that provides a hash-based lookup mechanism? The data is completely numeric, and the search keys are fixed-length bit fields that may be 8, 16, or 32 bytes long.

+2  A: 

This is something I've wanted to write for some time, but there's always something more pressing to do...

Still, for most use cases of a shared key-value RAM store, memcached would be the simplest answer.
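A lookup through libmemcached is only a few calls; a rough sketch, assuming a memcached server on localhost:11211 and a made-up key:

    #include <libmemcached/memcached.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        memcached_st *memc = memcached_create(NULL);
        memcached_server_add(memc, "localhost", 11211);

        const char key[] = "table:42";  /* illustrative key */
        memcached_set(memc, key, sizeof(key) - 1, "12345", 5, 0, 0);

        size_t len;
        uint32_t flags;
        memcached_return_t rc;
        char *val = memcached_get(memc, key, sizeof(key) - 1,
                                  &len, &flags, &rc);
        if (rc == MEMCACHED_SUCCESS) {
            printf("%.*s\n", (int)len, val);
            free(val);  /* memcached_get returns a malloc'd buffer */
        }
        memcached_free(memc);
        return 0;
    }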

In your case, it looks like you need something lower-level, so memcached, fast as it is, might not be the best answer. I'd try Judy arrays on a shmem block. They're really fast, so even if you wrap the access with a simplistic lock, you'd still get high-performance access.
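A rough sketch of that wrapping, using JudyHS for your fixed-length binary keys. This is single-process; to share across processes, the lock would need the PTHREAD_PROCESS_SHARED attribute and live in the shmem block, and Judy's own allocations would have to be redirected into the segment, which takes a custom allocator:

    #include <Judy.h>
    #include <pthread.h>

    static Pvoid_t table = (Pvoid_t)NULL;  /* JudyHS root */
    static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Insert or overwrite; PJERR error checking omitted for brevity. */
    void table_put(const void *key, size_t keylen, Word_t value)
    {
        pthread_rwlock_wrlock(&lock);
        Word_t *pval = (Word_t *)JudyHSIns(&table, (void *)key,
                                           keylen, PJE0);
        *pval = value;
        pthread_rwlock_unlock(&lock);
    }

    /* Returns 1 and fills *out on a hit, 0 on a miss. */
    int table_get(const void *key, size_t keylen, Word_t *out)
    {
        pthread_rwlock_rdlock(&lock);
        Word_t *pval = (Word_t *)JudyHSGet(table, (void *)key, keylen);
        if (pval)
            *out = *pval;
        pthread_rwlock_unlock(&lock);
        return pval != NULL;
    }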

For more complex tasks, I'd look into lock-free structures (some links: 1, 2, 3, 4). I even wrote one some time ago, hoping to integrate it into a Lua kernel, but it proved really hard to keep compatible with the existing implementation. Still, it might interest you.

Javier
FYI, the links (after "some links:") seem broken.
Fredrik
Fixed, thanks.
Javier