tags:

views:

70

answers:

2

Recently we have been working on migrating our software from a general PC server to an embedded system that uses a Disk on Module (DOM) instead of a hard disk drive.

My colleague insists that, since the DOM can only sustain about 1 million write operations, we should run our database entirely on a RAM disk and back it up to the DOM.

There are 3 ways to trigger the backup:

  1. User trigger

  2. Every 30 minutes

  3. Every time there is an add/update/delete operation in the database

As we expect that users will only modify the database when the system is installed, I think PostgreSQL might not write that often.

But I don't know much about PostgreSQL, so I cannot judge whether it is worth all this trouble, or which approach is better.

What do you think about it?

+1  A: 

Assuming that the claim about the DOM write cycles is true, which I can't comment on, then this won't work very well. PostgreSQL assumes that it can write whatever it wants whenever it wants (even if no logical updates are happening), and you have no real chance of making it go along with the 3 triggers that you mention.

What you could do is have the entire thing run on a RAM disk and have some operating system process back this up atomically to permanent storage. This needs careful file system and kernel support. This could work if your device is on most of the time, but probably not so well if it's the sort of thing that you switch on and off like a TV, because the recovery times could be annoying.
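For illustration, here is a minimal sketch of such an operating-system-level backup process in Python. It takes the crude route of stopping the server, copying the data directory to the DOM, and restarting, rather than relying on filesystem snapshot support; the mount points, the 30-minute interval, and the pg_ctl invocations are all assumptions made for the example.

    #!/usr/bin/env python3
    """Hypothetical sketch: periodically copy a RAM-disk PostgreSQL data
    directory to the DOM. Paths, interval, and pg_ctl usage are assumptions."""
    import os
    import shutil
    import subprocess
    import time

    RAMDISK_DATADIR = "/ramdisk/pgdata"   # assumed RAM disk mount point
    DOM_BACKUP_ROOT = "/dom/pg_backups"   # assumed DOM mount point
    INTERVAL_SECONDS = 30 * 60            # the question's 30-minute trigger

    def backup_once():
        # Stop the server first: a filesystem-level copy of a running
        # cluster's data directory is not a usable backup.
        subprocess.run(["pg_ctl", "stop", "-D", RAMDISK_DATADIR, "-m", "fast"],
                       check=True)
        try:
            dest = os.path.join(DOM_BACKUP_ROOT, time.strftime("%Y%m%d%H%M%S"))
            shutil.copytree(RAMDISK_DATADIR, dest)  # one burst of DOM writes
            # Delete older generations only after the new copy is complete,
            # so an interruption never leaves the DOM without a backup.
            for name in sorted(os.listdir(DOM_BACKUP_ROOT))[:-1]:
                shutil.rmtree(os.path.join(DOM_BACKUP_ROOT, name))
        finally:
            subprocess.run(["pg_ctl", "start", "-D", RAMDISK_DATADIR,
                            "-l", "/ramdisk/pg.log"], check=True)

    if __name__ == "__main__":
        while True:
            backup_once()
            time.sleep(INTERVAL_SECONDS)

The stop/start cycle means a short outage on every backup; that is the price of getting a consistent copy without snapshot support.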

Alternatives are either a more embedded-style RDBMS such as SQLite, or a storage system that can handle PostgreSQL, like the recent solid state drives, although some SSDs have bogus cache settings that might make them unsuitable for PostgreSQL.
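If SQLite were chosen instead, the write pattern becomes much easier to control, because the whole database is one ordinary file and nothing hits storage until the application commits. A minimal sketch using Python's bundled sqlite3 module; the file path and schema are invented for illustration:

    import sqlite3

    # The whole database lives in a single file, so the application
    # decides exactly when writes reach the DOM.
    conn = sqlite3.connect("/dom/config.db")  # hypothetical DOM path
    conn.execute("CREATE TABLE IF NOT EXISTS settings"
                 " (key TEXT PRIMARY KEY, value TEXT)")
    conn.execute("INSERT OR REPLACE INTO settings VALUES (?, ?)",
                 ("lang", "en"))
    conn.commit()  # this is the point at which the file is actually written
    conn.close()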

Peter Eisentraut
+1. But you don't need careful filesystem/kernel support, since PostgreSQL supports seamless online backup: http://www.postgresql.org/docs/8.1/static/backup-online.html
j_random_hacker
@j_random_hacker But then you still have the problem that the WAL segments need to be written to backup storage all the time, which somewhat contradicts the idea of not writing there all the time.
Peter Eisentraut
I thought it wasn't necessary to continually back up the WAL logs, but on closer reading it seems it is. In that case, periodically creating SQL dumps (and piping via gzip or similar) would be a better (and simpler) way to go: http://www.postgresql.org/docs/8.1/static/backup.html#BACKUP-DUMP
j_random_hacker
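For reference, a minimal sketch of that dump-and-compress approach; the database name, the output path on the DOM, and the 30-minute interval are assumptions, none of which come from the thread itself:

    #!/usr/bin/env python3
    """Sketch of periodic pg_dump-and-gzip backups; database name,
    paths, and interval are assumptions for illustration."""
    import gzip
    import os
    import subprocess
    import time

    def dump_to_dom():
        # pg_dump emits a plain SQL script on stdout; since the database
        # fits on a RAM disk anyway, buffering the dump in memory is fine.
        dump = subprocess.run(["pg_dump", "mydb"],
                              check=True, capture_output=True).stdout
        tmp = "/dom/mydb.sql.gz.tmp"
        with gzip.open(tmp, "wb") as f:
            f.write(dump)
        # Rename only after the compressed file is complete, so an
        # interrupted dump never clobbers the previous backup.
        os.replace(tmp, "/dom/mydb.sql.gz")

    while True:
        dump_to_dom()
        time.sleep(30 * 60)  # matches the question's 30-minute trigger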
+1  A: 

The problem of wearing out SSDs can be alleviated by whatever wear levelling the SSD's firmware does. Sometimes those chipsets don't do it well, or leave the responsibility to someone else. In that case, you can use a filesystem designed to do wear levelling by itself, such as UBIFS or LogFS.

Tobu
How well will PostgreSQL perform on these "nonstandard" file systems?
Peter Eisentraut
These filesystems allocate blocks like most SSDs do in firmware — http://lwn.net/Articles/353411/ . I don't know how well PostgreSQL deals with SSDs in general, or how it can be tuned, but this isn't entirely new ground.
Tobu