We have a multiuser system with users saving to a central SQL Server 2005 database. We have encountered an issue where one user refreshes changes from the database while another user is saving new data. We currently collect changes using a timestamp (rowversion) column on each table, which is assigned a new value on every row insert/update. Each client stores a timestamp recording the last time it pulled from the database.
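For context, the refresh pattern is roughly the following (table, column, and variable names are illustrative, not our real schema):

```sql
-- Rough sketch of the refresh query; dbo.Orders, RowVer, and @LastRefresh
-- are illustrative names. RowVer is a timestamp/rowversion column, which
-- SQL Server assigns a new database-wide increasing value on every
-- insert/update.
DECLARE @LastRefresh BINARY(8);   -- rowversion from the client's last pull

SELECT OrderId, CustomerId, Amount, RowVer
FROM dbo.Orders
WHERE RowVer > @LastRefresh;      -- pull only rows changed since last pull

-- The client then stores the highest RowVer it received as its new
-- @LastRefresh for the next refresh.
```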
Each save is done in a transaction. The first example we are dealing with is as follows:
- User1 starts a save, opening a transaction and inserting/modifying rows, changing their timestamps.
- User2 refreshes from the database before User1 has committed the changes, somehow causing User2's stored timestamp to advance past some of User1's uncommitted rows.
- User1 commits the transaction and all changes.
- User2 refreshes from the database again; however, because his timestamp was advanced previously, only the second half of the changes committed by User1 is pulled in, causing errors and application crashes (one possible interleaving is sketched below).
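A hypothetical sequence that would produce exactly this symptom, assuming the refresh query happens to read uncommitted rows (for example via a NOLOCK hint, which we have not confirmed); all names are illustrative:

```sql
-- Hypothetical two-session timeline, assuming a dirty read on refresh.

-- User1, session 1:
BEGIN TRAN;
UPDATE dbo.Orders SET Amount = 10 WHERE OrderId = 1;  -- assigned rowversion R1

-- User2, session 2: a dirty read sees the uncommitted row, and the client
-- advances its stored marker to R1.
SELECT OrderId, Amount, RowVer
FROM dbo.Orders WITH (NOLOCK)
WHERE RowVer > @LastRefresh;

-- User1, session 1, same open transaction:
UPDATE dbo.Orders SET Amount = 20 WHERE OrderId = 2;  -- assigned rowversion R2
COMMIT;

-- User2's next refresh, WHERE RowVer > R1, returns only the OrderId = 2
-- change: the second half of User1's transaction, and never the first.
```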
This makes us think that timestamps aren't necessarily the best way to determine what has changed in the database since the front end last accessed it. What would a better solution be?
Further example
- User1 starts a save, opening a transaction and inserting/modifying rows and updating their timestamps.
- User2 starts another save, opens a transaction, inserts/modifies rows (updating their timestamps), and commits his transaction.
- User3 refreshes from the database and pulls down all the data that User2 committed, updating his LastRefreshTimestamp to the last timestamp created in the db by User2.
- User1 commits his transaction.
- User3 refreshes again from the database. Based on his LastRefreshTimestamp, he pulls only the changes timestamped after the end of User2's transaction, missing all of the rows User1 changed: those rows received their timestamps before User2's transaction began, even though User1 committed later. A rough T-SQL timeline of this interleaving is sketched below.
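The key detail is that timestamp (rowversion) values are assigned when a row is written, not when the transaction commits. Roughly, with illustrative names and R1/R2 standing for the assigned rowversion values:

```sql
-- Two-writer timeline showing how a committed change can carry a
-- rowversion lower than the reader's stored marker.

-- User1, session 1:
BEGIN TRAN;
UPDATE dbo.Orders SET Amount = 10 WHERE OrderId = 1;  -- assigned rowversion R1
-- ...transaction stays open...

-- User2, session 2:
BEGIN TRAN;
UPDATE dbo.Orders SET Amount = 20 WHERE OrderId = 2;  -- assigned R2, R2 > R1
COMMIT;

-- User3 refreshes: only User2's committed row is visible (assuming the
-- refresh does not block on User1's locked row), so the client stores R2
-- as its LastRefreshTimestamp.

-- User1, session 1:
COMMIT;  -- the OrderId = 1 change becomes visible, but it carries R1 < R2

-- User3's next refresh, WHERE RowVer > R2, can never return the
-- OrderId = 1 row: User1's committed work is silently skipped.
```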