Hi.
I'm writing a bittorrent tracker in erlang. Given the nature of the service, I don't need absolute consistency (i.e. a client can be perfectly happy with a slightly outdated list of peers or torrent status).
My strategy so far has been to create mnesia tables as disc_copies, so that they live in RAM and mnesia automatically dumps them to disk once the transaction log grows past a certain threshold.
If the server crashes, some information will be lost. Not a big deal.
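For concreteness, the current setup looks roughly like this (the peer record, table name and threshold values are just placeholders I made up; the thresholds are ordinary mnesia application env settings):

```erlang
%% tracker_store.erl -- sketch of the disc_copies approach.
-module(tracker_store).
-export([init/0]).

%% Placeholder record; a real tracker would keep more per-peer state.
-record(peer, {info_hash, peer_id, ip, port, last_seen}).

init() ->
    %% Raise the log thresholds so mnesia dumps the log to disk less often
    %% (defaults: 1000 log writes / 3 minutes). Must be set before mnesia starts.
    application:set_env(mnesia, dump_log_write_threshold, 50000),
    application:set_env(mnesia, dump_log_time_threshold, timer:minutes(10)),
    mnesia:create_schema([node()]),
    mnesia:start(),
    %% disc_copies keeps the whole table in RAM, with a disk replica kept
    %% up to date through the transaction log.
    mnesia:create_table(peer,
        [{disc_copies, [node()]},
         {type, bag},
         {attributes, record_info(fields, peer)}]).
```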
A different approach would be to create two tables (one RAM only and one disk only) and have a process copy from RAM to disk every minute or so. This is more naive, but it would let me dump just a subset of what's in memory, reducing the overall disk overhead and possibly avoiding the use of a log altogether (I'm actually not sure about this last statement).
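Something like this sketch, maybe (table names peer_ram / peer_disk are made up, and I've left out the filtering that would pick only the changed subset):

```erlang
%% tracker_sync.erl -- sketch of the two-table approach: a ram_copies
%% working table plus a disc_only_copies table, with a process copying
%% state across once a minute.
-module(tracker_sync).
-export([start/0]).

-record(peer, {info_hash, peer_id, ip, port, last_seen}).

start() ->
    mnesia:create_table(peer_ram,
        [{ram_copies, [node()]}, {record_name, peer},
         {attributes, record_info(fields, peer)}]),
    mnesia:create_table(peer_disk,
        [{disc_only_copies, [node()]}, {record_name, peer},
         {attributes, record_info(fields, peer)}]),
    spawn_link(fun loop/0).

loop() ->
    timer:sleep(timer:minutes(1)),
    %% Copy records from RAM to disk. In practice this could filter on
    %% e.g. last_seen so only peers touched since the previous sync are
    %% written, which is where the reduced disk overhead would come from.
    mnesia:activity(async_dirty,
        fun() ->
            mnesia:foldl(
                fun(Rec, Acc) ->
                    mnesia:write(peer_disk, Rec, write),
                    Acc
                end, ok, peer_ram)
        end),
    loop().
```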
I'm sure there are many other ways to do this. What's yours?
-teo