I would like to design a database that is accessed through a very slow network link. Luckily the database itself is pretty static, so I'm going to cache the results aggressively. From time to time, inserts and updates may happen on tables while the client is running, so I would like to design a low-bandwidth scheme where the client knows exactly when something has changed and can avoid querying the remote database unnecessarily.
My idea was to create a special table with two columns: the name of a table, and a version number. This table would never be modified directly by application queries; it would only be updated through a stored procedure. Whenever any table is modified, the stored procedure would increment that table's version number in the special table.
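Here's a rough sketch of what I have in mind, using SQLite syntax just for illustration (SQLite has no stored procedures, so triggers stand in for one here; all table and column names are made up):

```python
import sqlite3

# Sketch only: SQLite used for illustration, names (items, table_versions) made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- An example application table.
    CREATE TABLE items (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );

    -- The special table: one row per application table.
    CREATE TABLE table_versions (
        table_name TEXT PRIMARY KEY,
        version    INTEGER NOT NULL DEFAULT 0
    );
    INSERT INTO table_versions (table_name, version) VALUES ('items', 0);

    -- Instead of a stored procedure, triggers bump the version on every write.
    CREATE TRIGGER items_ins AFTER INSERT ON items
    BEGIN
        UPDATE table_versions SET version = version + 1 WHERE table_name = 'items';
    END;
    CREATE TRIGGER items_upd AFTER UPDATE ON items
    BEGIN
        UPDATE table_versions SET version = version + 1 WHERE table_name = 'items';
    END;
    CREATE TRIGGER items_del AFTER DELETE ON items
    BEGIN
        UPDATE table_versions SET version = version + 1 WHERE table_name = 'items';
    END;
""")

conn.execute("INSERT INTO items (name) VALUES ('foo')")
print(conn.execute(
    "SELECT version FROM table_versions WHERE table_name = 'items'"
).fetchone()[0])  # -> 1
```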
The client can then store the results of its queries in a local database (say SQLite) along with the version number of each table. The next time it runs, it selects from the special table and checks whether any of the tables have changed.
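On the client side, the check would look roughly like this (again just a sketch: `remote_conn` stands for whatever DB-API connection talks to the real server, and the cache schema is made up):

```python
import sqlite3

def get_items(remote_conn, cache_path="cache.db"):
    """Fetch the 'items' table, answering from the local cache when the
    remote version number has not changed. Sketch only: remote_conn is
    whatever DB-API connection talks to the real server."""
    cache = sqlite3.connect(cache_path)
    cache.execute("CREATE TABLE IF NOT EXISTS cache_meta "
                  "(table_name TEXT PRIMARY KEY, version INTEGER NOT NULL)")
    cache.execute("CREATE TABLE IF NOT EXISTS items "
                  "(id INTEGER PRIMARY KEY, name TEXT)")

    # One tiny query over the slow link: what version is the table at now?
    rcur = remote_conn.cursor()
    rcur.execute("SELECT version FROM table_versions WHERE table_name = 'items'")
    remote_version = rcur.fetchone()[0]

    row = cache.execute(
        "SELECT version FROM cache_meta WHERE table_name = 'items'").fetchone()

    if row is not None and row[0] == remote_version:
        # Cache is current: answer locally, no bulk transfer over the link.
        return cache.execute("SELECT id, name FROM items").fetchall()

    # Cache is stale or empty: pull the whole table once and remember the version.
    rcur.execute("SELECT id, name FROM items")
    rows = rcur.fetchall()
    cache.execute("DELETE FROM items")
    cache.executemany("INSERT INTO items (id, name) VALUES (?, ?)", rows)
    cache.execute("INSERT OR REPLACE INTO cache_meta (table_name, version) "
                  "VALUES ('items', ?)", (remote_version,))
    cache.commit()
    return rows
```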
How does this sound? Are there other strategies to minimize redundant database bandwidth and aggressively cache the database? The system will not only be cross-platform, but clients written in different programming languages (C, C++, Obj-C, Python, etc.) will access it, so I'm trying to find the simplest thing that works in all cases.
Ideally I would like to design the tables to be incremental (a delete would actually be an insert marking the row as deleted), so I could just query the highest ID of a table and compare it to the local one, as in the sketch below. However, I don't know where to look for examples of this. Reading CouchDB's technical page makes my head explode.
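What I picture for the append-only variant is roughly this (a sketch with made-up column names; rows are never updated or deleted, a delete is a new "tombstone" row, and syncing means fetching only IDs above the highest one the client already has):

```python
import sqlite3

def sync_events(remote_conn, cache_path="cache.db"):
    """Pull only the rows the client has not seen yet from an append-only
    'events' table. Sketch only: columns are made up; a delete is a new
    row with deleted = 1 (a tombstone)."""
    cache = sqlite3.connect(cache_path)
    cache.execute("CREATE TABLE IF NOT EXISTS events "
                  "(id INTEGER PRIMARY KEY, item_id INTEGER, name TEXT, deleted INTEGER)")

    # Highest id we already have locally.
    local_max = cache.execute("SELECT COALESCE(MAX(id), 0) FROM events").fetchone()[0]

    # Fetch only the newer rows over the slow link. (Placeholder style depends
    # on the real server's driver; '?' is SQLite/DB-API qmark style.)
    rcur = remote_conn.cursor()
    rcur.execute("SELECT id, item_id, name, deleted FROM events WHERE id > ?",
                 (local_max,))
    new_rows = rcur.fetchall()

    cache.executemany(
        "INSERT INTO events (id, item_id, name, deleted) VALUES (?, ?, ?, ?)",
        new_rows)
    cache.commit()
    return len(new_rows)

# Current state of an item = its newest non-tombstone event, e.g.:
#   SELECT e.item_id, e.name FROM events e
#   WHERE e.id = (SELECT MAX(id) FROM events WHERE item_id = e.item_id)
#     AND e.deleted = 0;
```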